IRC log of #cubox of Fri 06 Dec 2013. All times are in CET

05:35 nutcase_ Anyone there ?
05:36 nutcase_ Does anyone know if the BOOT mode of the Cubox-i can be changed ? eg. boot from SATA, Serial etc..
05:39 nutcase_ exit
05:47 nutcase_ !U
10:22 SYS64738 hi
10:27 SYS64738 I've installed geexbox; is the warning in the console normal: "Invalid state for function" in cec_listen_single line xxx ?
10:31 SYS64738 http://mubox.voyage.hk/
10:32 jnettlet SYS64738, do you have a CEC compatible HDMI device plugged in?
10:32 SYS64738 ah ok understood
10:32 SYS64738 is headless for now
10:32 jnettlet If not then it is probably a harmless warning that should be silenced, I would file a bug
10:33 jnettlet with geexbox, not solidrun.
10:33 SYS64738 thank you
10:33 jnettlet I think that will go away when you plug your hdmi port in
10:34 SYS64738 is there a URL with a comparative table of the OSes supported by the cubox?
10:34 * jnettlet shrugs. probably
10:53 _rmk_ jnettlet: I've been thinking last night about converting this BMM stuff to basically just be a dma_buf allocator - and require importing into the vmeta driver to get at the buffers phys addresses
10:54 _rmk_ and if you want to mmap the buffer, you have to mmap the dma_buf fd
10:54 _rmk_ (which is something dma_buf already supports)
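For context, mapping a dma_buf from userspace really is just an mmap() on the exported fd, as _rmk_ notes. A minimal sketch, assuming the fd and size come from whatever allocator ends up exporting the buffer:

    /* Minimal sketch: map a dma_buf that some exporter handed us as a file
     * descriptor.  dmabuf_fd and size are assumed to come from the
     * allocator being discussed here. */
    #include <stddef.h>
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <sys/mman.h>

    static void *map_dmabuf(int dmabuf_fd, size_t size)
    {
        void *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_SHARED, dmabuf_fd, 0);
        if (ptr == MAP_FAILED) {
            fprintf(stderr, "mmap of dma_buf fd failed: %s\n", strerror(errno));
            return NULL;
        }
        return ptr;
    }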
10:54 jnettlet _rmk_, funny enough I had thought about doing the physical address lookup in the vmeta driver before adopting ION a while ago.
10:54 _rmk_ it gets us all the refcounting we need already because dma_buf already does that
10:55 _rmk_ it also gets us the sharing too
10:55 jnettlet yep, but is bmm needed then?
10:56 _rmk_ only to be a provider of dma_bufs
10:56 jnettlet shouldn't v4l already have that?
10:56 _rmk_ if you want to tie vmeta into v4l stuff
10:57 jnettlet videobuf2-dma-contig.c
10:57 jnettlet I can't remember if it is a standalone interface
10:59 _rmk_ hmm, that seems to only do non-cached mappings
10:59 jnettlet isn't that what we want for vmeta to do zero copy?
10:59 _rmk_ we ideally want the stream to be write combining
11:04 jnettlet yeah, I don't see any reason why this interface shouldn't support that.
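On the kernel side, the distinction being discussed is roughly dma_alloc_coherent() (uncached on ARM) versus dma_alloc_writecombine(). A hedged sketch against a ~3.x ARM kernel, not taken from videobuf2-dma-contig or any driver mentioned here:

    /* Sketch only: give the stream buffer a write-combining CPU mapping
     * instead of an uncached/coherent one.  dev, size and phys are
     * placeholders. */
    #include <linux/device.h>
    #include <linux/gfp.h>
    #include <linux/dma-mapping.h>

    static void *alloc_stream_buffer(struct device *dev, size_t size,
                                     dma_addr_t *phys)
    {
        /* uncached mapping - what videobuf2-dma-contig appears to give */
        /* return dma_alloc_coherent(dev, size, phys, GFP_KERNEL); */

        /* write-combining mapping - what we ideally want for the stream */
        return dma_alloc_writecombine(dev, size, phys, GFP_KERNEL);
    }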
11:07 jnettlet _rmk_, maybe we should ask dv_ what he thinks would work best, since he is generally elbow deep in this.
11:08 jnettlet I think there is already gstreamer support for doing zero copy videobuf overlays
11:13 jnettlet _rmk_, what are the chances of upstream accepting your char based vmeta driver?
11:14 _rmk_ slim because of the closed userspace
11:19 jnettlet I am just wondering if we should just roll all of this into a self contained driver, build and modprobe for support.
11:20 jnettlet then it can live happily regardless of kernel version, minus api changes, and it is much easier to package.
11:22 * jnettlet is tired of chasing the dragon.
11:24 * dv_ reads
11:24 dv_ hmm
11:24 dv_ write combining is essential, yes
11:25 _rmk_ jnettlet: yea, I also thought about converting vmeta to expose bmm via its /dev interface too
11:25 dv_ otherwise elements that generate frames on the CPU can end up being slower if they write into DMA buffers
11:26 dv_ I have observed this already. videotestsrc generates an SMPTE test image, but it is faster to let it write the image into the heap and then memcpy it to a dma buffer than to write it to the buffer directly,
11:26 dv_ because of the access patterns
11:26 dv_ writecombine fixes this
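The pattern dv_ is describing, sketched with placeholder names (fill_frame() stands in for whatever generates the test image):

    /* Illustration of the access-pattern problem: with an uncached DMA
     * buffer, generating the frame in cached heap memory and memcpy()ing
     * it over can beat writing into the DMA buffer directly; a
     * write-combining mapping removes the need for the detour.
     * fill_frame(), heap_scratch and dma_ptr are hypothetical. */
    #include <stddef.h>
    #include <string.h>

    extern void fill_frame(unsigned char *dst);   /* hypothetical generator */

    void produce_frame(unsigned char *dma_ptr, unsigned char *heap_scratch,
                       size_t frame_size)
    {
        /* slow on an uncached mapping: scattered writes go straight to DRAM */
        /* fill_frame(dma_ptr); */

        /* often faster: generate in cached memory, then one linear copy */
        fill_frame(heap_scratch);
        memcpy(dma_ptr, heap_scratch, frame_size);
    }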
11:26 _rmk_ dv: ok, so that rules out the v4l2 allocator
11:26 dv_ hmm, what about that allocator?
11:27 _rmk_ videobuf2-dma-contig.c
11:27 dv_ is it uncached only?
11:27 _rmk_ looks that way
11:27 dv_ hm perhaps the best combination would be writecombine for buffers that are written into, and cached for readonly ones
11:28 dv_ but writecombine alone is enough already
11:29 dv_ speaking of v4l, this was a part in the original fsl gstreamer plugins that I didnt fully understand. you cant simply give v4l a dmabuf/phys addr for a frame that it shall render from?
11:30 dv_ the fsl variant of v4l I mean, which does seem to accept physical addresses.. what I have seen instead is that it needs some kind of v4l objects, which are created with v4l allocators. have you seen some kind of physaddr/dmabuf <-> v4l interface so that the v4l sink can be used for decoded video output?
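For what it's worth, mainline V4L2 does have a dmabuf import path (V4L2_MEMORY_DMABUF, with the fd passed in v4l2_buffer.m.fd); whether the fsl variant discussed here honours it is exactly the open question, so treat this as a generic sketch only:

    /* Generic V4L2 dmabuf import sketch - depends entirely on the driver
     * supporting V4L2_MEMORY_DMABUF.  vfd is an open video device,
     * dmabuf_fd an fd exported elsewhere, length the buffer size. */
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    static int queue_dmabuf(int vfd, int dmabuf_fd, unsigned int length)
    {
        struct v4l2_requestbuffers req;
        struct v4l2_buffer buf;

        memset(&req, 0, sizeof(req));
        req.count  = 1;
        req.type   = V4L2_BUF_TYPE_VIDEO_OUTPUT;
        req.memory = V4L2_MEMORY_DMABUF;
        if (ioctl(vfd, VIDIOC_REQBUFS, &req) < 0)
            return -1;

        memset(&buf, 0, sizeof(buf));
        buf.index  = 0;
        buf.type   = V4L2_BUF_TYPE_VIDEO_OUTPUT;
        buf.memory = V4L2_MEMORY_DMABUF;
        buf.m.fd   = dmabuf_fd;
        buf.length = length;
        return ioctl(vfd, VIDIOC_QBUF, &buf);
    }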
11:38 jnettlet dv_, I thought that was the idea of a mem2mem device.
11:38 dv_ maybe. I am not very familiar with v4l.
11:39 jnettlet I have not kept up on this stuff either.
11:39 dv_ I was just trying to make sense of it, in particular in combination with the freescale changes
11:41 jnettlet _rmk_, do you have any idea who is responsible for dictating the type of backing buffer used for a dmabuf?
11:42 jnettlet mem2mem seems great but if the camera passes vmeta an sg-backed dmabuf, that needs to be converted first.
11:42 * jnettlet thinks this all needs better documentation.
11:42 jnettlet or at least centralized documentation.
11:43 dv_ that would be nice, because right now I only have the eglvivsink, which uses the GPU
11:43 dv_ and well, the ipusink, but that one is not useful in an x11/wayland environment
11:46 jnettlet how does the eglvivsink perform?
11:46 dv_ it uses vivante direct textures
11:47 dv_ these do colorspace conversions in the GPU, and accept physical addresses
11:47 dv_ so if the video is stored in dma buffers, it can give you smooth 1080p playback
11:47 dv_ downside: it uses the GPU, and therefore potentially increases power consumption more than a pure 2D sink would
11:55 _rmk_ jnettlet: the exporter - and the importer can only refuse to import a buffer which doesn't match its requirements
11:56 jnettlet sane, but blech. seems like a handshake should be made. I guess that is what gstreamer would theoretically do.
11:57 jnettlet looking at the elements of the pipeline and resolving to the common back buffer method.
11:58 jnettlet s/method/format/
11:59 _rmk_ that's fine if you have just a 1:1 linking, but what if you have 1:many
12:00 dv_ gstreamer imposes an asymmetric model. downstream has to adapt to upstream, but can tell upstream what it can support.
12:01 jnettlet wouldn't 1:many always resolve down to the decoder's preferred format and the consumers would have to deal? With the penalty that you lose mem2mem for devices that don't support the output format.
12:01 dv_ (dataflow goes from up- to downstream)
12:02 jnettlet to me it seems like the encoder/decoder bin should dictate what it needs and can do and then let source and destination adapt.
12:02 dv_ yes, this is what is being done
12:03 dv_ the decoder is however restricted by what downstream is capable of doing. so if a sink can playback RGB, YV12, and I420 for example, then an upstream element - say, a decoder - can pick one of these
12:03 dv_ the sink then has to accept this
12:04 jnettlet for formats yes, but not for type of buffers
12:04 dv_ type of buffers?
12:04 dv_ ah you mean dmabuf , regular buffers on heap..
12:04 jnettlet s/g vs coherent vs writecombine
12:04 jnettlet cached, buffered etc.
12:05 dv_ well in gstreamer 1.2 additional information is available to do exactly that
12:05 jnettlet oh that is great I didn't know they added that.
12:05 dv_ a decoder can mark its output with "eglimage" for example, indicating that the buffers contain eglimages
12:05 dv_ this way, a sink that supports eglimage buffers is preferred
12:06 dv_ but for the VPU stuff its not that necessary
12:06 dv_ its nice to have, because then I can mark the eglvivsink as "accepting dma buffers", but not essential
12:07 dv_ but you are right about one other thing: if writecombine/uncached is part of the format specification, then it is possible to let downstream offer upstream buffer pools, and upstream can reject them if they deliver uncached buffers
12:07 jnettlet The only benefit I see for switching to v4l is to support zero copy encoding/decoding from a camera.
12:07 dv_ (in gstreamer, elements can offer buffer pools to other elements)
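As a rough illustration of the negotiation dv_ describes (GStreamer 1.x), a downstream element can offer a pool in its ALLOCATION query handler and upstream is free to reject it; create_dma_pool() below is a hypothetical factory, not a gstreamer-imx function:

    /* Rough GStreamer 1.x sketch: a sink offers a buffer pool to upstream
     * via the ALLOCATION query.  create_dma_pool() is hypothetical and
     * would return a pool backed by DMA/write-combining memory. */
    #include <gst/gst.h>

    extern GstBufferPool *create_dma_pool(GstCaps *caps, guint buffer_size);

    static gboolean offer_pool(GstQuery *query, GstCaps *caps, guint buffer_size)
    {
        GstBufferPool *pool = create_dma_pool(caps, buffer_size);

        if (pool == NULL)
            return FALSE;

        /* minimum of 2 buffers, no maximum */
        gst_query_add_allocation_pool(query, pool, buffer_size, 2, 0);
        gst_object_unref(pool);
        return TRUE;
    }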
12:08 jnettlet right, which is what I want to do.
12:08 dv_ well, what about power consumption?
12:08 bencoh if you're ever considering gstreamer, you might want to have a look at upipe
12:08 dv_ I already support buffer pools. in fact, the internal VPU buffer pool logic clashes with mine, and I had to add tricks to work around it
12:09 bencoh (I'm being a bit partial here, but you really might want to give it a try)
12:10 dv_ ah, similar to mediastreamer2 ?
12:11 jnettlet bencoh, well we already have gstreamer support, mostly thanks to dv_.
12:11 dv_ jnettlet: funny thing is, freescale already has (or had?) plans for an 1.x port
12:12 dv_ we never heard anything about it until hste saw a post on the freescale community page
12:12 jnettlet dv_, I saw that revelation the other day. If these companies would just work in the open they would save themselves a lot of money
12:12 dv_ yeah. they probably have several divisions which don't know much about each other
12:13 dv_ bencoh: so this is a generic framework? or with focus on multimedia?
12:16 dv_ I think it would make sense to measure power consumption of a simple videotestsrc -> eglvivsink pipeline vs. videotestsrc -> imxipusink
12:16 dv_ if the figures are similar, then I wont spend time on v4l
12:17 bencoh dv_: with focus on multimedia
12:17 bencoh but the core is quite generic
12:17 dv_ bencoh: is this your work?
12:17 bencoh part of it is mine, yeah
12:18 dv_ then pay a lot of attention to audio synchronization once you start working on audio output modules
12:18 dv_ I say this because this is one area where you need to design things for this from the start
12:18 bencoh well, we do
12:19 bencoh we're an iptv/broadcast company; our transcoder is based on upipe
12:19 dv_ I mean, typically, audio is driven by a different quartz crystal than the rest of the system
12:19 dv_ ah
12:19 dv_ okay then you are aware of clock drift
12:19 bencoh a bit ;)
12:20 dv_ typically it is solved in one of three ways: (a) observe/accumulate the drift, and once it reaches a certain threshold, cut samples, or insert nullsamples
12:20 dv_ (b) resample the audio data by a few ppm (requires a good async resampler)
12:20 dv_ (c) put a PLL between audio crystal and I2S line
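A minimal sketch of option (a), accumulating the drift and correcting once it crosses a threshold; the names and units are illustrative only:

    /* Sketch of drift handling (a): accumulate the measured drift in
     * samples and, past a threshold, report how many samples to drop
     * (positive) or how many null samples to insert (negative). */
    typedef struct {
        long accumulated;   /* signed drift in samples */
        long threshold;     /* e.g. a few milliseconds worth of samples */
    } drift_state;

    static long drift_correct(drift_state *s, long measured_drift_samples)
    {
        long correction = 0;

        s->accumulated += measured_drift_samples;
        if (s->accumulated > s->threshold || s->accumulated < -s->threshold) {
            correction = s->accumulated;   /* >0: cut samples, <0: insert */
            s->accumulated = 0;
        }
        return correction;
    }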
12:20 bencoh dv_: http://upipe.org/doc/Overview.html#_Clocks
12:21 bencoh hm, indeed, audio sinks are a bit more tricky than what we usually have for transcoders
12:22 dv_ a good approach is to let a thread run and continuously playback,
12:22 dv_ and measure timestamps once playback of a segment has been completed
12:23 dv_ so, if for example you just played a segment that covers 500ms of sound, but the audio device played this in 400 ms, then you know it is faster than the main clock
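Putting numbers on that example: 500 ms of audio played back in 400 ms of system-clock time gives a ratio of 500/400 = 1.25, i.e. the audio clock runs fast relative to the main clock (real-world drift is of course far smaller than 25%). As a trivial helper:

    /* Drift measurement as described: compare the audio time a segment
     * covers with the system-clock time it actually took to play.
     * Ratio > 1.0 means the audio clock is fast, < 1.0 slow. */
    static double drift_ratio(double segment_audio_ms, double playback_wall_ms)
    {
        return segment_audio_ms / playback_wall_ms;
    }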
12:23 bencoh dv_: our alsa "sink" does something like that
12:23 dv_ I didnt find it. where is it?
12:23 dv_ ah stupid me. upipe-alsa :)
12:24 dv_ bencoh: I suggest this use case to really stress test both the RTP support and the audio sync support:
12:24 dv_ (1) get at least two embedded devices with simple I2S output (that is, no fancy USB audio chip)
12:25 dv_ (2) write some code using your stuff that generates a short sine pulse (say, 20ms), followed by silence (about 80ms)
12:25 jnettlet dv_, _rmk_, I think using v4l2 is a bit useless for us. It really looks like all the decoding handling is done in the kernel which we know isn't happening.
12:25 dv_ (3) watch in the oscilloscope how the pulses align
12:27 dv_ this tests: audio sink clock drift handling, time synchronization (you need to synchronize the clocks of the receiver device(s) to match the sender's), rtp jitter handling, rtp duplicate buffer handling
12:27 bencoh dv_: hm that might be something we'd need at some point to achieve/assert accuracy, true
12:27 jnettlet How do you guys like this methodology: a standalone vmeta module and userspace library. We give the kernel module the ability to allocate memory through CMA, and resolve physical addresses of dmabufs passed to it.
12:27 dv_ currently, I can achieve <2ms phase difference between receivers in gstreamer. but that took a long time to get there.
12:28 jnettlet Then we give libvmeta the ability to both import and export dmabufs for the binary driver to use.
12:28 dv_ jnettlet: hmm. sounds nice.
12:28 dv_ but what is the distinction here? what does the module do, what does the userspace lib do?
12:28 jnettlet then to add vmeta support a distro just needs to compile an out of tree driver, and a library and package it up
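The split being proposed might look something like the header below from userspace; every name here is hypothetical, sketched from the description above rather than taken from any existing libvmeta or libphycontmem:

    /* Hypothetical userspace API for the proposed split: the kernel module
     * allocates CMA-backed memory and hands out dma_buf fds; the library
     * wraps that and can also import fds allocated elsewhere (camera,
     * DRM, ...).  None of these names exist in the current code. */
    #include <stddef.h>
    #include <stdint.h>

    /* allocate a contiguous buffer in the kernel module; returns a dma_buf fd */
    int vmeta_buf_alloc(size_t size, int *dmabuf_fd);

    /* import an externally allocated dma_buf and resolve its physical
     * address for the binary decoder library */
    int vmeta_buf_import(int dmabuf_fd, uint32_t *phys_addr);

    /* map/unmap the buffer into this process (mmap on the fd underneath) */
    void *vmeta_buf_map(int dmabuf_fd, size_t size);
    void vmeta_buf_unmap(void *ptr, size_t size);

    /* drop the reference; the kernel frees the memory with the last user */
    void vmeta_buf_free(int dmabuf_fd);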
12:29 dv_ well, this sounds like what I had in mind. a kernel that is mainlined as much as possible, and for vmeta etc. you just add a kernel module
12:29 jnettlet yep. The only downside is this is tossing out ION and libphycontmem for something more self contained.
12:30 dv_ bencoh: really this is just more of a final acid test. so many things are involved that considering it a near-time goal is unreasonable.
12:30 jnettlet but if nothing is getting accepted upstream then this is much easier to maintain, and possibly model the imx stuff after
12:30 dv_ jnettlet: makes sense
12:30 bencoh dv_: yup
12:30 dv_ ah. something came up. bbl.
12:31 jnettle 12:31 * jnettlet has to head out as well for a bit.
12:31 jnettlet _rmk_, let me know how you feel about that.
12:31 jnettlet it mostly models after the work you have already done, with just a bit of consolidation.
12:39 _rmk_ jnettlet: let me play for a bit :)
12:51 dv_ ok back briefly
12:51 dv_ jnettlet: one more note about the buffers:
12:52 dv_ I inspect the memory blocks inside the gstreamer buffers and see if they are physically contiguous (there are flags for that)
12:52 dv_ if so, I pass them on directly, achieving zerocopy. if not, I use a previously allocated dma buffer and copy the pixels into it
12:53 dv_ so worst case, things are copied around. playback still occurs.
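The fallback dv_ describes, sketched in GStreamer 1.x terms; is_phys_contiguous() stands in for the plugin-specific flag check and dma_buffer for the pre-allocated DMA buffer:

    /* Sketch: hand physically contiguous buffers to the hardware directly
     * (zerocopy), otherwise copy the pixels into a pre-allocated DMA
     * buffer.  is_phys_contiguous() is a placeholder for the
     * plugin-specific flag check. */
    #include <string.h>
    #include <gst/gst.h>

    extern gboolean is_phys_contiguous(GstBuffer *buf);   /* placeholder */

    static GstBuffer *prepare_for_hw(GstBuffer *input, GstBuffer *dma_buffer)
    {
        GstMapInfo in_map, out_map;

        if (is_phys_contiguous(input))
            return gst_buffer_ref(input);          /* zerocopy path */

        /* worst case: copy; playback still occurs */
        if (!gst_buffer_map(input, &in_map, GST_MAP_READ))
            return NULL;
        if (!gst_buffer_map(dma_buffer, &out_map, GST_MAP_WRITE)) {
            gst_buffer_unmap(input, &in_map);
            return NULL;
        }
        memcpy(out_map.data, in_map.data, MIN(in_map.size, out_map.size));
        gst_buffer_unmap(dma_buffer, &out_map);
        gst_buffer_unmap(input, &in_map);
        return gst_buffer_ref(dma_buffer);
    }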
13:08 dv_ hi rabeeh
13:08 dv_ I think you mentioned a power consumption increase when using the GPU. by how much does it increase?
13:09 MarcusVinter I can test it now if you know of a decent command to stress the GPU
13:10 dv_ hm there are some examples by vivante
13:10 dv_ perhaps they are useful
13:11 dv_ (in /opt/ )
13:11 dv_ or, you could build my plugins and compare two pipelines
13:11 dv_ nah, scratch that. thats not a stress test.
13:12 MarcusVinter But the ones by vivante in /opt/ will?
13:14 dv_ I think so
13:14 dv_ it should be /opt/viv_samples/
13:14 MarcusVinter Thanks. Ill run it now
13:15 MarcusVinter 02 watts before.
13:15 dv_ now the tricky thing is to find out how much of the power consumption is due to the GPU
13:15 MarcusVinter Yeah :/
13:16 MarcusVinter I dont have viv_samples. I just remembered I'm not using the default install
13:17 MarcusVinter I can run a flash video though?
13:17 MarcusVinter Youtube video
13:18 hste_ MarcusVinter: Try glxgears if you have that
13:19 jnettlet dv_, rabeeh and I have been bouncing that back and forth. You can get a baseline for encoding power by sending the output to fakesink.
13:19 dv_ hste_: no that wont work
13:19 jnettlet but then the question is how much of the remaining increase is gpu vs ddr because of the higher DDR throughput needed for the gpu
13:19 dv_ or hmm perhaps it does if the vivante libGL finally works
13:19 jnettlet vivante libGL will handle glxgears
13:20 dv_ okay
13:20 hste_ Using jas-hacks_ rootfs the gl work
13:20 jnettlet but ideally you want to use the glesv2 benchmark to reduce software abstraction overhead
13:20 hste_ It has also glmark2-es2
13:20 dv_ you mean glmark2?
13:20 jnettlet a lot of the libGL stuff will require cpu fiddling to get it ready for the GLESv2 hardware
13:20 dv_ okay
13:21 jnettlet yeah glmark2-es2 is probably the safest bet right now
13:21 dv_ so this can tell me if eglvivsink is enough, or a second sink is necessary for low power consumption.
13:21 dv_ but on the dove chipset, we already know doing it with egl/gles draws significantly more power, right?
13:21 jnettlet dv_, ultimately we probably want the second 2d sink regardless as there are vivante chips out there that only do 2d
13:22 hste_ I can tell it produce a lot more heat when using the gpu
13:22 dv_ ah good point
13:22 MarcusVinter Installing glmark2
13:22 dv_ MarcusVinter: okay, perhaps try my plugins then too
13:22 dv_ its a more real-world like test
13:22 MarcusVinter How may I aqquire them?
13:23 dv_ so, one stress test with glmark2, and one real-world one with my stuff
13:23 MarcusVinter acquire*
13:23 dv_ get them from https://github.com/Freescale/gstreamer-imx
13:23 MarcusVinter Thanks
13:23 dv_ you'll need a cross compiler and the listed dependencies
13:23 dv_ what do you use? ubuntu? yocto?
13:23 jnettlet because the GPU also has dvfs support, so it is possible 1080p video may not clock up the GPU as much as a full 3d benchmark app
13:33 MarcusVinter I'm just gonna grab lunch.
13:58 jnettlet _rmk_, just realized that I will need to keep libphycontmem compatible with whatever new driver we end up with so Marvell's vmeta optimized flash plugin continues to work for our deployments
13:58 jnettlet not the end of the world, just more of a reminder to myself.
13:59 jnettlet which reminds me I need to fill out the paperwork to see if Adobe will allow me to redistribute it.
14:04 _rmk_ jnettlet: oh, there's a flash plugin... for armhf? where can I get that?
14:05 _rmk_ oh, not yet distributable :p
14:48 zombiehoffa did the cubox-i preorders start shipping end of november like in the nov 19th email update?
14:54 jnettlet_ zombiehoffa, yep for about a week now I think
15:07 zombiehoffa jnettlet_, nifty, I ordered later so that's probably why I haven't gotten tracking info yet, hoping for soon;) Super pumped that you guys are way better at preorders than butterfly labs;)
15:08 dv_ butterfly labs?
15:09 dv_ who are they, and what about them?
15:09 zombiehoffa asic bitcoin miner. They did preorders and promised delivery in 3 months and took 1.5 years to deliver, all while still taking preorders
15:09 zombiehoffa pretty much the definition of terrible.
15:09 dv_ ah right.
15:10 zombiehoffa Then, as soon as they had their first gen asic miners out the door, they immediately made them worthless by publishing preorders for second gen which was 10-20 times faster than first gen.
15:10 dv_ the problem I have with bitcoin miners is that they cant be used for anything else
15:10 dv_ so if the bitcoin bubble bursts, you are sitting on useless hardware
15:10 zombiehoffa dv_, yah, me too. I only bought the one. It's paid for itself twice already.
15:10 zombiehoffa I'm not getting involved anymore, since it looks like tulip mania now
15:10 zombiehoffa was just an interesting experiment for me.
15:11 zombiehoffa only put ~1300 into it.
15:11 zombiehoffa it's paid a little over 2 grand in three months
15:11 zombiehoffa should still keep returning for a while yet, until third gen asic miners make the power usage uneconomical
15:12 zombiehoffa I'm hoping the tulip mania continues long enough for me to quadruple my investment;) I keep pulling money out every chance I get, cash in hand instead of in some shady exchange.
15:12 dv_ I should have mined some coins with a GPU back in 2009..
15:13 * jnettlet_ agrees. heck I should have just bought some in 2011
15:13 zombiehoffa yah, makes me feel stupid;) I never got involved. Pissed a few friends off when they asked what 1 piece of advice I would give my younger self and I said "buy bitcoins"
15:14 * jnettlet_ wonders how horribly the Vivante GPU does with a CL based miner
15:14 zombiehoffa gpu mining isn't profitable anymore..
15:15 * _rmk_ pretty much refuses to get involved with bitcoin stuff
15:15 _rmk_ I keep getting "tip4commit" emails which I pretty much hate
15:16 jnettlet_ yeah I have gotten a few of those
15:16 _rmk_ "You were tipped for your commit. Please, log in and tell us your bitcoin address to get it."
15:16 zombiehoffa _rmk_, I don't think it's the time to get involved anymore. Now is the time to suck up money from the greater fools on any btc you might have left and head for the door
15:17 _rmk_ if someone wants to donate something to me, use a hard currency not this crappy bitcoin which will probably become worthless in no time because of the amount of "mining"
15:17 zombiehoffa tip4commit is legit isn't it?
15:17 zombiehoffa take the money man..
15:17 _rmk_ mining is effectively no different from "lets print our own money"
15:17 zombiehoffa if you are lucky enough to live in a city with a bitcoin atm you can just turn it to cash
15:17 _rmk_ I'm not.
15:18 _rmk_ bitcoins are utterly useless to me
15:18 jnettlet_ _rmk_, but that is basically the same thing modern banking systems do.
15:18 zombiehoffa _rmk_, are you in the usa? There are a bunch of exchanges that are cheap to get your coins out of for us guys
15:18 zombiehoffa and if you are in japan it's cheap to get your money out of mtgox
15:19 _rmk_ jnettlet: generally, it's frowned upon to print your own currency... the authorities take a severe dislike to it
15:19 _rmk_ zombiehoffa: nope, uk
15:19 zombiehoffa bitcoin is flawed. It won't work. Deflation is built into the currency and eventually the transaction fees of transferring money will cost more than the transactions you are paying for.
15:19 jnettlet_ _rmk_, of course unless you are doing it for the government. aka The Fed.
15:19 zombiehoffa That's what will eventually kill it if it doesn't collapse in a tulip mania tribute.
15:20 _rmk_ jnettlet: btw, my vlc is currently running with a dmabuf based bmm :)
15:20 jnettlet_ great!
15:20 jnettlet_ CMA backed yet?
15:20 jnettlet_ :-P
15:20 _rmk_ once I update the gstreamer stuff on my ubuntu 12.04, I can kill *lots* of code - basically everything that Marvell ever wrote :)
15:21 _rmk_ and yes, with it working with dmabufs, it'll be much easier to back it with CMA
15:21 zombiehoffa _rmk_, https://bitbargain.co.uk/
15:21 _rmk_ because I no longer have the BMM API accounting to worry about
15:22 jnettlet_ is it in a state such that I can still adapt libphycontmem to work with it for legacy support?
15:23 _rmk_ my vmeta_lib.c basically contains this for the allocation:
15:23 _rmk_ fd = bmm_dmabuf_alloc(size, attr, align);
15:23 _rmk_ if (fd < 0) {
15:23 _rmk_ dbg_printf(VDEC_DEBUG_MEM, "%s: %s\n",
15:23 _rmk_ __FUNCTION__, strerror(errno));
15:23 _rmk_ return NULL;
15:23 _rmk_ }
15:24 _rmk_ that allocates the dmabuf, returning its fd as a descriptor to it
15:24 _rmk_ I'm hacking a bit next...
15:24 _rmk_ if (bmm_dmabuf_phys(fd, &paddr) < 0) {
15:24 _rmk_ bmm_dmabuf_free(fd);
15:24 _rmk_ return NULL;
15:24 _rmk_ }
15:24 _rmk_ that gets its physical address - ideally we want to pass the fd into vmeta to get that though
15:24 _rmk_ ptr = bmm_dmabuf_map(fd, 0, size);
15:24 _rmk_ if (ptr == NULL)
15:24 _rmk_ bmm_dmabuf_free(fd);
15:25 _rmk_ and to free it:
15:25 _rmk_ int fd;
15:25 _rmk_ fd = bmm_dmabuf_fd(ptr);
15:25 _rmk_ if (fd == -1)
15:25 _rmk_ return;
15:25 zombiehoffa _rmk_, if you don't want your tip4commit btc, I will happily turn them into beer for you;)
15:25 _rmk_ bmm_dmabuf_unmap(ptr);
15:25 jnettlet_ okay I can easily adapt libphycontmem to support that. great.
15:25 _rmk_ bmm_dmabuf_free(fd);
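Pulling those pasted fragments together, the allocation and free paths read roughly as below; the wrapper names and the paddr type are guesses, and the bmm_dmabuf_* prototypes are assumed to come from the accompanying header:

    /* Consolidated from the fragments pasted above, reassembled for
     * readability only; function names and the paddr type are guesses. */
    #include <errno.h>
    #include <stddef.h>
    #include <string.h>

    void *vdec_alloc(size_t size, int attr, int align)
    {
        void *ptr;
        unsigned long paddr;
        int fd;

        fd = bmm_dmabuf_alloc(size, attr, align);
        if (fd < 0) {
            dbg_printf(VDEC_DEBUG_MEM, "%s: %s\n",
                       __FUNCTION__, strerror(errno));
            return NULL;
        }

        /* temporary: look up the physical address here; ideally the fd
         * would be passed into the vmeta driver instead */
        if (bmm_dmabuf_phys(fd, &paddr) < 0) {
            bmm_dmabuf_free(fd);
            return NULL;
        }

        ptr = bmm_dmabuf_map(fd, 0, size);
        if (ptr == NULL)
            bmm_dmabuf_free(fd);
        return ptr;
    }

    void vdec_free(void *ptr)
    {
        int fd = bmm_dmabuf_fd(ptr);   /* virtual address -> fd lookup */

        if (fd == -1)
            return;
        bmm_dmabuf_unmap(ptr);
        bmm_dmabuf_free(fd);
    }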
15:25 _rmk_ I pretty much decided we need to be able to go from virtual address to fd - which is all done using a rbtree
15:26 jnettlet_ just about to ask that.
15:26 jnettlet_ that all makes sense.
15:26 jnettlet_ this all seems small enough that it can be done in the vmeta kernel module.
15:26 unununium_ Hello!
15:27 jnettlet_ that would make this all quite maintainable.
15:27 _rmk_ jnettlet: yep, I'm definitely thinking about exporting the kernel side such that the vmeta driver can expose the stuff via its chardev
15:28 _rmk_ I currently have only three ioctls to support this: one to do the allocation, one for cache flushing (which I've not yet fully implemented) and one to get the phys address - the last of which is just a hack while I prove this can work
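The three ioctls might be declared along these lines; the names, magic number and struct layouts are purely illustrative and not _rmk_'s actual interface:

    /* Purely illustrative ioctl layout for the three operations listed:
     * allocate, flush cache, query physical address (the last a stated
     * temporary hack).  Everything here is made up. */
    #include <linux/ioctl.h>
    #include <linux/types.h>

    struct bmm_alloc_req {
        __u64 size;        /* in: requested size */
        __u32 align;       /* in: alignment */
        __u32 attr;        /* in: caching/write-combine attributes */
        __s32 dmabuf_fd;   /* out: exported dma_buf fd */
    };

    struct bmm_phys_req {
        __s32 dmabuf_fd;   /* in */
        __u64 phys;        /* out: physical address (temporary hack) */
    };

    #define BMM_IOC_MAGIC 'B'
    #define BMM_IOC_ALLOC _IOWR(BMM_IOC_MAGIC, 0, struct bmm_alloc_req)
    #define BMM_IOC_FLUSH _IOW(BMM_IOC_MAGIC, 1, __s32)   /* dmabuf fd */
    #define BMM_IOC_PHYS  _IOWR(BMM_IOC_MAGIC, 2, struct bmm_phys_req)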
15:28 jnettlet_ what are your thoughts on supporting importing dmabufs for source of destination buffers?
15:29 jnettlet_ s/of/or/
15:29 _rmk_ well, the advantage with this is it's something that becomes possible - the downside is vmeta's restrictions. buffers must be contiguous and I believe stream buffers must be a multiple of 64K
15:30 jnettlet_ all buffers have to be contiguous? I thought each buffer just needed to be contiguous
15:30 _rmk_ each buffer
15:31 _rmk_ with dmabuf import in vmeta, we certainly _could_ allocate scanout buffers in drm, and pass them into vmeta for its output
15:31 _rmk_ you'd need to go in at the vmetahal library to do that though
15:32 _rmk_ (which is basically what I'm doing for vlc)
15:32 jnettlet_ right, and then I could switch the OLPC camera to use coherent memory and pass dmabufs directly into vmeta for encoding the image to jpeg
15:32 _rmk_ ... suddenly this approach makes complete sense :)
15:33 jnettlet_ and it will actually be maintainable without a patched kernel
15:34 jnettlet_ \0/
15:37 dv_ care to summarize? I got lost in the discussion
15:43 jnettlet_ dv_, sure
15:45 jnettlet_ it is pretty much what I laid out earlier after looking at the v4l stuff. A single kernel module for vmeta support that does memory allocations through CMA and passes them to userspace as a dmabuf fd.
15:45 dv_ why is this vmeta specific?
15:46 dv_ sounds like it could be used 1:1 for imx as well
15:46 _rmk_ hmm, libcodecvmetadec needs vdec_os_api_get_va()
15:46 jnettlet_ we can implement a similar infrastructure for imx
15:48 jnettlet_ _rmk_, what are your thoughts on leaving bmm as a standalone module in order to support imx chipsets as well?
15:50 jnettlet_ _rmk_, that is fine. we just need to patch libvmeta to use the new api
15:50 jnettlet_ or still use libphycontmem to interface with the new kernel API
15:54 _rmk_ we could do both :)
15:56 purch should the serial UART send something on power up even without an sd card in?
15:56 purch carrier1
15:56 jnettlet_ purch, nope it just sits there with a stupid blank look on its face.
15:58 purch ok, lets try then the binary u-boot from wiki pages
15:59 jnettlet_ purch, did you try to build your own and it didn't bootstrap?
15:59 purch yeah
15:59 purch I try these http://download.solid-run.com/pub/solidrun/c1/kernel/initial/images/
16:03 jnettlet_ purch why don't you try my u-boot image. let me get the URL for you.
16:03 purch sure
16:10 jnettlet_ purch, https://plus.google.com/112696520735663897193/posts/5bwRsvryYuf
16:11 jnettlet_ I think that link should have most the latest u-boot changes.
16:16 purch hmm, imx file
16:17 purch have to look at mkimage
16:17 purch aaa, dd is enough
16:18 purch out now and more later
17:12 wumpus jnettlet_: the Vivante GC2000 OpenCL does not support enough instructions to run the bitcoin miner, but yeah performance would suck anyway :)
17:13 jnettlet_ wumpus, that is what I figured
17:13 jnettlet_ but what about performance per watt?
17:16 wumpus I don't know, desktop GPUs are power-hungry monsters but they're also hugely parallel and fast, so they may still win on the performance/watt front
17:18 dv_ performance/watt is why people buy bitcoin ASICs
17:19 wumpus yes, ASICs are better in that regard as they're specialized
17:21 wumpus bitcoin is pretty neat but I've never really been into mining
17:24 zombiehoffa wumpus: no contest. the ~400 dollar ati cards do about 350 - 400 mhash/sec, the 1200 bfl asic I bought does 60300 mhash/sec
17:25 zombiehoffa it uses about the same amount of electricity as one 400 $ ati card
17:26 neofob it would be cool if people could use those toys for Folding@Home or something similar
17:27 zombiehoffa yah, I would consider switching it over if it were possible once it becomes uncompetitive, which should happen in about a year.
17:32 zombiehoffa probably not possible though.
17:33 zombiehoffa unless you could convert the folding@home problem into something that benefits from a lot of sha-2 hashes being crunched...
17:33 wumpus the point of the ASICs is that they have the (pretty simple) logic directly encoded in the silicon, if you could switch them over to generic other tasks they'd be more like another kind of GPU
17:34 wumpus which would be less efficient at the specific task
17:36 zombiehoffa right, as I said, probably not possible unless you can convert the folding@home problem to something that benefits from super fast sha-2 crunching.
17:36 purch jnettlet_: this should work: sudo dd if=u-boot.imx of=/dev/sdb bs=512 seek=2 skip=2 conv=fsync
17:37 purch still nothing to screen console
17:42 purch I put V in pin 1, G to pin 6, TX to pin 8 and RX to pin 10
17:56 rabeeh purch: for the mainline u-boot keep the 'skip=2' and remove the 'seek=2'
17:56 rabeeh sudo dd if=u-boot.imx of=/dev/sdX bs=512 seek=2
17:57 jnettlet_ purch, yeah what rabeeh said
18:00 jnettlet_ purch, and add conv=fsync to it as well to make sure everything gets flushed out
18:02 dv_ purch: use the diagram here: http://elinux.org/RPi_Low-level_peripherals#General_Purpose_Input.2FOutput_.28GPIO.29
18:02 dv_ the TX/RX/ground/5V pins are identical to the rpi
18:07 jnettlet_ except we use 3.3 volts serial
18:08 jnettlet_ but purch knows that because he has been trying to get the right serial connector for a while now
18:16 dv_ right.
18:16 dv_ I use the CP2102 USB-TTL adapter
18:19 jnettlet_ I use the OLPC custom serial adapter version 3. Just because my version 2 is allocated to my trimslice.
18:19 jnettlet_ but the version 2 is way more flexible for connections. usb and serial ports, 3.3 and 5 volt support NULL/straight through
19:36 purch dv_: do you say that my pin setup is wrong?
19:36 purch 3,3V to pin 1
19:36 dv_ uh no idea
19:36 purch =)
19:36 dv_ I just use pin 4 for 3,3v
19:36 purch red power led on the board shines
19:37 dv_ so I have 3v3, gnd, tx, rx directly next to each other
19:37 purch aa, so 3,3V to rpi 5V pin
19:38 dv_ replace 5v with 3,3v in your mind
19:38 purch yes
19:38 purch I followed this one http://imx.solid-run.com/wiki/index.php?title=Carrier-One_Hardware
19:38 dv_ note: I dont know if the other pins are the same as well
19:38 dv_ or if they differ in other ways between carrier1 and rpi
19:38 dv_ yeah I found that one confusing
19:39 purch that wiki page says pin 1 3,3V
19:39 purch I try pin 4 with 3,3V
19:39 rabeeh purch: like rpi
19:40 dv_ purch: I connect it like https://www.modmypi.com/image/cache/data/GPIO/USB-to-TTL-Serial-Cable-Debug-Console-Cable-for-Raspberry-Pi-1-800x800.jpg
19:40 dv_ except that I use pin 4 for 3,3v , not pin 2
19:41 dv_ here a closeup https://www.modmypi.com/image/cache/data/GPIO/USB-to-TTL-Serial-Cable-Debug-Console-Cable-for-Raspberry-Pi-3-800x800.jpg
19:42 rabeeh pin 1 is 3.3v
19:42 rabeeh pin 2 is 5v
19:43 dv_ also on the carrier one?
19:43 dv_ and pin 4 is 5v as well?
19:43 jnettlet_ purch pin 1 is 3.3, pin 6 is ground, pins 8 and 10 for tx/rx
19:44 dv_ I thought pins 2 and 4 use 3,3v on the carrier one , instead of 5v like the rpi
19:45 rabeeh dv_: yes and yes
19:45 rabeeh no; it's exactly like rpi
19:45 dv_ that may explain why I sometimes couldnt boot the board
19:45 rabeeh :)
20:53 purch rabeeh: c1 works if you put 5V on pin 2?
20:53 rabeeh yes
20:53 rabeeh you mean 5v input from psu?
20:53 rabeeh if so then yes; but notice that you will be bypassing a fuse and some filters
20:54 rabeeh (i.e. you must really put 5v and not 9v or other)
20:54 purch just for the usb ttl cable voltage
20:57 purch I have nokia 5V 1200mA usb power, should be enough
20:58 rabeeh yes
20:58 rabeeh but this is with c1 and a single usb drive connected to it
20:58 rabeeh i.e. 700mA + 500mA for the drive
20:59 dv_ ah nvm. I didnt connect any 5v or 3v3 pins
21:11 rabeeh purch: you probably need the same
21:11 rabeeh only tx/rx/gnd
21:11 purch let my try
21:20 rabeeh jnettlet_: with regards to the default bootcmd I wonder why you've set so many env variables
21:21 rabeeh in my mind it should have been very simple; check uenv.txt in mmc first partition and then that would set the bootcmd
21:22 rabeeh it's nice to have ready-to-use env variables for booting from all sort of different locations; but the default built-in should simply import the uenv.txt from the external storage and that one should decide bootargs and bootcmd
21:24 rabeeh anyone else has thoughts about this?
21:27 rabeeh or should we stick with running boot.scr from first fat / extX partition on the micro SD?
21:32 dv_ thats my thought as well
21:32 dv_ no. uenv.txt is much better
22:15 neofob exit
22:15 neofob exit
22:15 neofob oops, wrong screen, sorry!
22:33 jnettlet_ rabeeh, the env setup is basically just covering all the possible combos of filesystems and u-boot init types. First check is for the uscript, if that doesn't exist then it looks for uEnv.txt, if that doesn't exist then it falls back to trying to load a kernel, and it includes support for both uImage and zImage.
22:34 jnettlet_ If none of that works it tries tftpboot, but that is really only for dev work currently.
22:34 rabeeh ok.
22:35 rabeeh for android i'll use boot.scr that internally calls uEnv.txt
22:35 rabeeh any idea if comments can be added to uEnv.txt?
22:36 jnettlet_ no idea. Have never tried.
22:36 jnettlet_ I have re-factored a lot of the initial init process but still have a few more cases to cover.
22:37 jnettlet_ I always check for a boot.scr which overrides everything else.
22:37 rabeeh ok. i'll keep the default bootcmd as-is
22:38 rabeeh lets try it out; i think it's good except for the case where a uImage is raw on the sd card
22:38 jnettlet_ and of course, part of it also supports loading the dtb so it doesn't have to be appended onto the kernel
22:38 rabeeh but in that case there is no way you can find out where it starts and ends
22:38 rabeeh yes. this is good preparation
23:01 rabeeh jnettlet_: looks like C style comments work in uEnv.txt
23:02 rabeeh i.e /* blah blah */