Tuesday, November 16, 2010

using the android media framework

1. use a Java application and OpenCore
2. render MP4 (H.264 + MP3) on SurfaceFlinger and AudioFlinger using software codecs
3. do the same using hardware accelerators
4. test it with an incoming MP4 stream over the network
5. where does the audio/video demuxing happen?
6. how many components are there in the pipeline?
7. which component redirects the streams to SurfaceFlinger and AudioFlinger?
8. how are the controls interfaced with the pipeline?

model
there is a loop:
it processes command packets first,
then the data packets.
data is both audio and video, one after the other,
so a complete cycle is: process commands, process video, process audio (sketched below)
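a minimal sketch of that cycle in C++; Command, Packet, the queues and the process* functions are all my own placeholder names, not OpenCore's real types:

#include <queue>

struct Command { int id; };
struct Packet  { bool isVideo; /* payload, timestamp, ... */ };

static std::queue<Command> commandQ;
static std::queue<Packet>  videoQ, audioQ;
static bool running = true;

static void processCommand(const Command&) { /* start/stop/seek/... */ }
static void processVideo(const Packet&)    { /* decode + hand to video sink */ }
static void processAudio(const Packet&)    { /* decode + hand to audio sink */ }

void runLoop() {
    while (running) {
        // 1. command packets take priority
        while (!commandQ.empty()) { processCommand(commandQ.front()); commandQ.pop(); }
        // 2. then one video packet ...
        if (!videoQ.empty()) { processVideo(videoQ.front()); videoQ.pop(); }
        // 3. ... then one audio packet -- one complete cycle
        if (!audioQ.empty()) { processAudio(audioQ.front()); audioQ.pop(); }
    }
}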

android media framework 1

1. the mfw has a graph
2. the graph has nodes
3. nodes are sources or sinks: encoders, decoders, parsers, modifiers
4. a source or sink can be software or hardware
5. frames come from a source and go into a sink
6. frames are put into the command queue of each node
7. each hw/sw node has an OMX interface on its upper edge (see the sketch after this list)
8. a node is a .so
9. init is done using .cfg files
10. the .so contains the cfg interface and the OMX wrapper
11. every .so contains a player engine; the player driver invokes this engine
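roughly, in code -- a hypothetical layout for points 3-8, not PVMF's actual node interface:

#include <queue>
#include <string>
#include <vector>

enum class NodeKind { Encoder, Decoder, Parser, Modifier };   // point 3
enum class NodeImpl { Software, Hardware };                   // point 4

struct NodeCommand { int code; };

struct Node {
    NodeKind kind;
    NodeImpl impl;
    std::string soName;              // a node is a .so (point 8)
    std::queue<NodeCommand> cmdQ;    // per-node command queue (point 6)

    // upper edge: OMX-style entry points (point 7)
    void omxSendCommand(const NodeCommand& c) { cmdQ.push(c); }
    void omxEmptyThisBuffer() { /* frame in: source -> node */ }
    void omxFillThisBuffer()  { /* frame out: node -> sink  */ }
};

// frames flow through the graph from source nodes into sink nodes (point 5)
struct Graph { std::vector<Node*> nodes; };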

12. the fw searches /system/etc/01_Vendor_ti_omx.cfg for hw codecs;
if it is not found, the SW codecs from the PVOMX components are used (picked up via /system/etc/pvplayer.cfg) -- the fallback is sketched below
you can disable hardware acceleration by editing this file: platform/vendor/ti/zoom2/BoardConfig.mk
take a look at the supported OMX roles (tComponentName): platform/hardware/ti/omx/system/src/openmax_il/omx_core/src/OMX_core.c
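as I read it, the selection boils down to something like this; pickOmxConfig is a made-up name, only the two .cfg paths come from the framework:

#include <string>
#include <unistd.h>   // access()

std::string pickOmxConfig() {
    const char* vendorCfg = "/system/etc/01_Vendor_ti_omx.cfg";
    if (access(vendorCfg, R_OK) == 0)
        return vendorCfg;                    // hw codecs from the vendor cfg
    return "/system/etc/pvplayer.cfg";       // fall back to PVOMX sw codecs
}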

13. hardware codecs === hardware accelerators
14. they are different from display accelerators

android media framework

1. the levels:
level  1 -- MIO source/sink
level  0 -- OMX source/sink (decoder)
level -1 (sink)   -- SurfaceFlinger, AudioFlinger
level -1 (source) -- raw hw, encoded stream, decoded stream, hw
2. MIO (media I/O) is like a switch box
3. playback is driven by the global clock and the stream packets' timestamps: at any moment a packet's timestamp has to be in sync with the relative clock value (sketched below)
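in code the sync rule from point 3 looks roughly like this; getMediaClockMs and the tolerance value are assumptions, not PVMF's real clock API:

#include <cstdint>

int64_t getMediaClockMs();                   // global playback clock (hypothetical accessor)

enum class Action { Render, Wait, Drop };

Action schedule(int64_t packetTsMs) {
    const int64_t toleranceMs = 40;          // ~one frame at 25 fps (assumed)
    const int64_t now = getMediaClockMs();
    if (packetTsMs > now + toleranceMs) return Action::Wait;   // too early: hold it
    if (packetTsMs < now - toleranceMs) return Action::Drop;   // too late: drop it
    return Action::Render;                   // in sync with the relative clock
}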