09:07 | jnettlet | dv_, tested out your gstreamer-1 plugins. Everything seemed to work fine on both our mmp2 and mmp3 platforms. With the newer libvmeta.so and libmiscgen.so provided by Marvell for our platform I had to link in librt, and the vdec_os_api_suspend* functions aren't defined so I had to ifdef that code out for testing. Once I got it built everything seemed to work on par or better than gstreamer-0.10
10:32 | dv_ | jnettlet: ping |
10:32 | dv_ | jnettlet: nice to hear they work well. I will add some build script tests for vdec_os_api_suspend* , and link rt to the plugin |
10:34 | dv_ | there is one open issue though, one I stumbled upon yesterday. with some videos whose width is not a multiple of 16, the decoder handles this just fine, as does the regular xvimagesink. you'll get a correct image. the vmetaxv sink however does not - the stride is not passed on properly yet
10:35 | dv_ | jnettlet_: ping? :) |
10:36 | jnettlet_ | dv_, sorry I am working out speed problems with my DSL connection. |
10:36 | dv_ | okay, did you get my messages? |
10:36 | jnettlet_ | yep. You can have my patches if you want them |
10:36 | jnettlet_ | very simple |
10:36 | dv_ | also that about the issue with vmetaxv? |
10:37 | dv_ | ah sure, I welcome patches |
10:37 | dv_ | there is one open issue though, one I stumbled upon yesterday. with some videos whose width is not a multiple of 16, the decoder handles this just fine, as does the regular xvimagesink. you'll get a correct image. the vmetaxv sink however does not - the stride is not passed on properly yet. it is unfortunately rather tricky to do so, since once I add the code for that, other errors appear. the main issue is that this is a "modified"
10:37 | dv_ | (more accurately, horribly hacked) version of xvimagesink. an xv sink written from scratch for vmeta would be much better, but I am concerned about what to do if I get non-vmeta frames to display then.. |
10:37 | jnettlet_ | I think I have a patch for that case in our vmetaxv repo |
10:38 | dv_ | xvimagesink changed considerably since 0.10 |
10:38 | jnettlet_ | I set different width alignments depending if it is video coming from the vmeta engine or not |
10:38 | dv_ | so 0.10 patches cannot translate 1:1 |
10:38 | jnettlet_ | I understand. |
10:38 | dv_ | yes, I have seen it. 16 if it is vmeta, 8 if it isnt. |
10:39 | dv_ | now, the nice thing about gstreamer 1.0 is that gstbuffers can have metadata blocks. the vmeta generated buffers have a vmeta metadata block, which amongst other things, contains the physical address of the dma memory |
10:39 | dv_ | so I dont need libphycontmem to figure out a phys address out of a virtual one |
10:40 | jnettlet_ | that is nice. |
10:40 | jnettlet_ | so you just need the allocation routines. |
10:40 | dv_ | the complication is that 1.0 introduces buffer pools, and xvimagesink uses its own for creating buffers with xshm as their memory. somehow, when the stride doesnt fit, the order of used xshm buffers does not seem to match the display order, causing strange "flickering" on screen |
10:40 | dv_ | yes |
10:40 | dv_ | just the vdec_* stuff |
10:40 | jnettlet_ | That will make the ION implementation easier as you won't need my custom IOCTL's to handle passing physical addresses to userspace |
10:42 | dv_ | ah, cool |
10:42 | dv_ | I noticed something strange though. it seems as if vmeta itself can support h264 MVC. |
10:42 | dv_ | but I havent seen any flags or constants for this in the vmeta ipp layer. |
10:43 | jnettlet_ | I can open a ticket with Marvell if you would like
10:43 | dv_ | it is just a theoretical thing atm, since gstreamer does not have the facilities to support MVC yet, but its interesting... overall I'd have liked to use the vmeta API directly, but the headers lack a lot of stuff (like codec ID constants, modes etc.) |
10:43 | dv_ | also, I do not know how much work is the ipp layer doing for me :) |
10:44 | dv_ | but thanks for testing! so the suspend stuff is gone in the newer vmeta libraries? should I remove the calls that suspend vmeta in pause and resume when playback starts again? |
10:47 | jnettlet_ | one sec let me grab that laptop. |
10:54 | jnettlet_ | so specifically vdec_os_api_suspend_check and vdec_os_api_suspend_ready don't exist. I am not sure if those were dove only and they aren't in our libvmeta.so because it is built for mmp2/mmp3. Marvell tends to ifdef lots of stuff out and distribute platform specific libraries :-( |
10:54 | jnettlet_ | It will probably be up to us to settle on a "standard" library to base work off of. |
10:55 | dv_ | oh |
10:56 | dv_ | well I will write a test for it in the build script then |
10:56 | jnettlet_ | Your comment carries over a comment that states the suspend/resume after a frame completes is a Dove specific routine |
10:56 | dv_ | yeah, I wasnt completely sure about that, and looks odd |
10:56 | jnettlet_ | Maybe we can remove it and see if it works on the Dove hardware. If everything seems to work just dump it from everything |
10:56 | dv_ | its just some magical and mysterious procedure that you have to do, without knowing why |
10:57 | jnettlet_ | My gut feeling is it is a work-around for flushing a cache somewhere that they couldn't figure out. |
10:57 | dv_ | I have noticed some freezes when using the suspend functions during the gstreamer paused<->playing state transitions. but I do not know if I am calling them correctly in that case |
10:58 | dv_ | yeah, probably |
10:58 | dv_ | anyway, I guess turning down the vmeta clock frequency will be more useful for powersave than using these calls |
11:03 | jnettlet_ | When you suspend are you disabling the clock, interrupts and power? |
11:03 | dv_ | well I am just calling the suspend functions, nothing more |
11:03 | dv_ | I assumed thats what they are for |
11:04 | jnettlet_ | it depends which version of the kernel driver you are using I think. |
11:05 | jnettlet_ | dv_, have you attempted to decode audio with the vMeta chip? According to the marketing info about it, it should be able to decode mp3, aac, and a few others. |
11:05 | dv_ | vmeta? audio? |
11:05 | dv_ | I thought it is video only |
11:06 | dv_ | nothing in the marvell code indicated that |
11:06 | dv_ | if it does, then it would be good news |
11:07 | jnettlet_ | I should probably just ask Marvell. Somehow I ended up with a data/marketing sheet that talked about it decoding audio |
11:07 | dv_ | if you can get in contact with marvell about vmeta, please ask them these questions: (1) what about mvc? (2) more details about vmeta audio please? (3) is there more specific information about what vmeta supports? for example, not just "h264", but "h264, profiles baseline/main/high, levels xyz" etc.
11:10 | jnettlet_ | will do my best. |
11:10 | dv_ | also, there is one type of video stream I did not test yet (didnt have the time): video streams which change resolution during playback |
11:10 | dv_ | I did design the decoder with this in mind, but as said, didnt actually run it with one yet |
11:11 | dv_ | thanks, this would help a lot. |
11:18 | jnettlet_ | I can't find that info on the audio decoding. It might have been incorrect and pulled. It is probably in my email archive somewhere. |
17:12 | _rmk_ | right, I'm not going to bother trying to get anything else into mainline. It's hopeless trying. Mainline maintainers just don't give a damn about this hardware, don't want to know it, don't want the code either. |
17:13 | _rmk_ | I'm personally getting to the point of thinking about jumping over to one of the BSDs and seeing whether attitudes there are any better |
17:14 | jnettlet | _rmk_, still problems with your drm driver? |
17:14 | _rmk_ | problems with the kernel side of vmeta... I've mostly given up with drm... now Mark's being an arse over the kirkwood audio stuff |
17:15 | _rmk_ | its not like I haven't explained to Mark (and Liam) many times about the structure of the kirkwood hardware |
17:15 | _rmk_ | both by email and irc |
17:15 | jnettlet | well the asoc stuff is a complete nightmare |
17:16 | _rmk_ | Liam sent me patches for a driver he's working on which uses the new dpcm stuff... which is even more incomprehensible than the dapm stuff |
17:16 | jnettlet | hahaha....great |
17:17 | jnettlet | I guess this is why Google has frozen Android at 3.4, and most devices still run 3.0 |
17:18 | jnettlet | things were esoteric but they did work. Now ARM audio seems more like black magic |
17:18 | _rmk_ | so you also struggle to understand asoc? |
17:20 | jnettlet | myself and multiple others have spent way too much time figuring out why asoc was twiddling codec registers and bias settings |
17:20 | dv_ | isnt asoc this alsa SoC stuff? isnt it work-in-progress? |
17:21 | _rmk_ | dv_: its existed since 2006, so I guess it's still a "young" subsystem :) |
17:21 | dv_ | well, if it is as lovely designed as the alsa API, then I guess it can induce headaches |
17:23 | jnettlet | I think the problem is they are trying to abstract out an interface to hardware that is implemented so differently by the vendors that the abstraction becomes more complex than just writing and maintaining individual drivers.
17:23 | dv_ | oh, that design pitfall I know very well |
17:23 | dv_ | trying to press an abstraction layer onto something that cannot be abstracted very well |
17:25 | dv_ | usually, the better choice is to provide some kind of building blocks which you can use in your own individual driver |
17:28 | dv_ | _rmk_: is there an lkml link to the discussions about vmeta and kirkwood you've had? |
17:55 | _rmk_ | finally, Mark's provided some input on the DPCM stuff at long last, so I at least have a vague idea how it may be used to solve this stuff |
19:02 | _rmk_ | well, it looks to me like I have the kirkwood-i2s layer correct to Liam's DPCM driver despite what Mark says |
19:39 | shesselba | * shesselba is getting tired of getting nonsense, off-topic replies
19:43 | shesselba | _rmk_: retesting the tda998x patches; I'll send hopefully today. Also fixed audio not playing in modes with pixclk > ~100MHz.
20:00 | _rmk_ | shesselba: me too |
20:01 | _rmk_ | shesselba: it turns out, having re-analysed Liam's code, that there's nothing wrong with the kirkwood-i2s changes, it's only a small change needed to each of the DAI links |
20:03 | shesselba | ok, have audio working with JF's patches already. Will do my tda998x patch testing and then come back to proper kirkwood-i2s with DT |
20:03 | shesselba | will send one last reply to Mark *g* |
20:03 | _rmk_ | basically, I don't think Mark bothered to understand that before commenting on my patches |
20:03 | shesselba | IMHO he still doesn't |
20:04 | shesselba | he keeps noting things that are *in no way* related to my questions |
20:05 | _rmk_ | I never had any response to many of the questions I sent over the weekend either |
20:06 | shesselba | Just for clarification, with "thinko" he means "thought wrong" ? |
20:06 | shesselba | like typo? |
20:06 | _rmk_ | yes |
20:06 | shesselba | pfft |
20:07 | shesselba | why are we merging kirkwood-dma and kirkwood-i2s then, instead of having two nodes in the first place? |
20:08 | _rmk_ | well... I'd recommend merging them because their separation is purely an early asoc imposed structure - they're both really the same hardware |
20:08 | _rmk_ | all that this block is in reality is a DMA engine which outputs an I2S and SPDIF formatted stream at a specified rate |
20:08 | shesselba | of course we want them merged, as separate device nodes will _never_ be accepted by any DT maintainer |
20:09 | _rmk_ | yes. thankfully he's taken my patch to do that. |
20:09 | shesselba | yeah, I am just ranting because he keeps telling me that there is a relation between driver and device node
20:10 | _rmk_ | rotfl. this doesn't work. |
20:10 | shesselba | what on earth is so special with audio? we even agreed on a non-super-node requirement for video |
20:11 | shesselba | I guess he keeps projecting ASoC onto DT |
20:12 | dv_ | say, this weird bmm hack in the x11 drivers with the abused shm buffers, is this here to stay? |
20:12 | dv_ | or is something better coming? |
20:51 | _rmk_ | looks like the multiple creation of widgets and overwriting of DAI widgets problem is back |
20:52 | shesselba | didn't Mark just apply your patch?
22:09 | _rmk_ | well... basically... this stuff that Mark wants me to use doesn't work at all |
22:09 | _rmk_ | its going to need quite a few hacks to codecs and the asoc core |
22:16 | shesselba | _rmk_: nah, it is always the others |
22:25 | dv_ | is the mainlining process always this frustrating? |
22:25 | dv_ | or is it specific to asoc right now? |
22:32 | _rmk_ | dv: god knows, but I'm getting really frustrated with this |
22:37 | jnettlet_ | It has been pretty tiresome for the OLPC code, and a lot of our stuff is just for our hardware. |
22:39 | jnettlet_ | I just saw a patch for our DCON driver to change a variable type so it would work properly on big endian systems. The DCON is OLPC's custom hardware I don't think there is any chance of it being put on a big endian system. |
22:43 | shesselba | jnettlet_: armada drm is not considered to be taken, also there is no DCON driver yet. Feel free to object to any structs early :) |
22:46 | _rmk_ | now we get to the real reasons... |
22:46 | _rmk_ | 21:41 < broonie> I'm not that concerned about anonymous people, or TBH about kirkwood stuff. |
22:47 | jnettlet_ | shesselba, I started merging my KMS work into the most recent armada drm. I also have a thunderstone board so I can add support for mipi outputs...but not right away. |
22:48 | jnettlet_ | we decided that the DCON will always be a standalone device for us. Integrating it into the geode kernel and userspace code was just a bad idea. |
22:48 | dv_ | _rmk_: sounds pretty arbitrary |
22:48 | dv_ | he doesnt care about kirkwood, so it wont be mainlined? |
22:57 | _rmk_ | dv_: that's about the message I'm getting |
22:58 | dv_ | who is he? |
22:58 | _rmk_ | ASoC maintainer |
22:59 | dv_ | and it is now a kernel policy that soc audio *must* conform to asoc standards? |
23:00 | _rmk_ | we could convert it to ALSA instead |
23:33 | dv_ | hm it seems my decoder can handle changing resolutions |
23:35 | dv_ | I guess though that this use case is pretty rare. the most likely candidates for this are TV networks, and they just change the aspect ratio in all cases I've seen |