Monday, November 22, 2010

android pvplayer mp4 3gp stack

/system/bin/xxx.so
AndroidAudioOutput::, AndroidSurfaceOutput::
1.Java
======
java ------ MediaPlayer.java
jni ------ libmedia_jni.so (wrapper)
native ------ libmedia.so
native ------ libui.so
native ------ libhardware.so
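The layering above can be sketched as a toy model in plain Java (no Android classes; `NativePlayerStub` and the state strings here are stand-ins): the Java-level MediaPlayer is a thin facade that forwards every call through the JNI wrapper into the native player, which owns the real state.

```java
// Toy model of the MediaPlayer.java -> libmedia_jni.so -> libmedia.so layering.
// NativePlayerStub stands in for the native side; in Android this would be
// native methods loaded via System.loadLibrary("media_jni").
public class MediaPlayerFacade {
    static class NativePlayerStub {
        String state = "Idle";
        void setDataSource(String path) { state = "Initialized"; }
        void prepare() { state = "Prepared"; }
        void start() { state = "Started"; }
    }

    private final NativePlayerStub n = new NativePlayerStub();

    // The Java facade does no real work; every call is delegated down.
    public void setDataSource(String path) { n.setDataSource(path); }
    public void prepare() { n.prepare(); }
    public void start() { n.start(); }
    public String state() { return n.state; }

    public static void main(String[] args) {
        MediaPlayerFacade mp = new MediaPlayerFacade();
        mp.setDataSource("/sdcard/clip.mp4");
        mp.prepare();
        mp.start();
        System.out.println(mp.state()); // prints "Started"
    }
}
```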

2.Player engine
===============
libpvplayer.so, libopencoreplayer.so

3.mediaserver
=============
native -- libmediaplayerservice.so
binary -- /system/bin/mediaserver (source under frameworks/base/media/mediaserver/)

4.framework
===========
stagefright, opencore, gstreamer

5.Parser Node
==============
libpv.so ---- parses the source

Video
#################

6.Decoder Node
===============

directory ---- external/opencore/codecs_v2/omx/omx_mycodec
test app ---- external/opencore/codecs_v2/omx/omx_testapp
omx_nnn.so
omx_mmm.cfg
ti decoder -- libOMX_Core.so
hw codec info --- external/opencore/codecs_v2/omx/omx_common/src/pv_omxmastercore.cpp
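pv_omxmastercore.cpp essentially keeps a registry of OMX cores (a vendor core such as TI's libOMX_Core.so plus the PV software core, discovered through the .cfg files) and resolves a component name to the core that provides it, preferring the hardware core when both offer the component. A minimal sketch of that lookup; the core and component names here are made up for illustration:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a master OMX core: maps component names to the core library that
// provides them; registration order decides priority (hw core registered first).
public class MasterOmxCore {
    private final Map<String, String> componentToCore = new LinkedHashMap<>();

    // In OpenCORE the list of cores to load comes from the omx_*.cfg files.
    public void registerCore(String coreLib, String... components) {
        for (String c : components) componentToCore.putIfAbsent(c, coreLib);
    }

    public String coreFor(String component) {
        return componentToCore.getOrDefault(component, "none");
    }

    public static void main(String[] args) {
        MasterOmxCore master = new MasterOmxCore();
        master.registerCore("libOMX_Core.so", "OMX.TI.Video.Decoder"); // hw core first
        master.registerCore("libpv_sw_core.so",                        // hypothetical sw core
                            "OMX.PV.avcdec", "OMX.TI.Video.Decoder");
        System.out.println(master.coreFor("OMX.TI.Video.Decoder")); // libOMX_Core.so
        System.out.println(master.coreFor("OMX.PV.avcdec"));        // libpv_sw_core.so
    }
}
```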

7.MIO node
===========
AndroidVideoOutput::
OSCL_xxx
libopencorehw.so

8.Surfaceflinger
================
libsurfaceflinger.so

9.video client
==============
libagl.so

Audio
###############

10.Decoder Node
===============
libvorbisdec.so

11.MIO node
===========
AndroidAudioOutput::
OSCL_xxx

12.Audioflinger
================
libaudioflinger.so

13.audio client
==============
libpv.so/libaudio.so implements the hardware interface AudioHardwareInterface
the AudioHardwareInterface base class is in Audioflinger
"AUDIO_SERVICE"



Accelerated Video Codec
Accelerated Video Hardware
Combined Acceleration codec+video
#################################
coming

opencore
node-if node-if node-if
omx-if mio-if

Atom + Nvidia Ion Myth

any doubt on libagl and libhgl: see the directfb documents
directfb uses ioctls to distinguish hw from sw rendering

android framework components

1.OMX components are decoders and others
2.MIO components are source/sink and others

A module can have multiple interfaces. A graph node is a module; it can have a control interface.
At init time, based on queries on the control interface, the other interfaces are linked.
Interfaces even exchange pool allocators. For example, a pool can be created at the MIO layer, and its fd
passed to the OMX layer, where it is used for allocations.

physical memory can be allocated at the MIO and passed to the frame source to fill in.
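The allocator exchange can be sketched as follows (a toy model; the class and method names are mine, and the real exchange passes an fd/allocator through the node control interfaces): a pool owned by the MIO layer hands out buffers that the OMX layer fills and returns.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: the MIO layer owns a buffer pool; the OMX layer borrows buffers
// from it, fills them, and the MIO layer consumes and recycles them.
public class SharedBufferPool {
    private final Deque<byte[]> free = new ArrayDeque<>();

    public SharedBufferPool(int count, int size) {
        for (int i = 0; i < count; i++) free.push(new byte[size]);
    }

    public byte[] acquire() { return free.pop(); }   // handed to the OMX layer
    public void release(byte[] b) { free.push(b); }  // returned after MIO renders it
    public int available() { return free.size(); }

    public static void main(String[] args) {
        SharedBufferPool pool = new SharedBufferPool(4, 4096); // created at MIO layer
        byte[] buf = pool.acquire();   // OMX decoder fills this buffer
        buf[0] = 42;
        pool.release(buf);             // MIO sink done rendering, recycle
        System.out.println(pool.available()); // 4
    }
}
```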

3.node --- parser
node --- codec ----- OMX
node --- sink/source --- MIO
each node will have cfg files

4.currently pvcore expects
MIO in --> OMX encoder --> PV recorder
PV player --> OMX decoder --> MIO out
the common MIO out is libopencorehw.so; it talks to surfaceflinger
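The player graph above can be sketched as a chain of stages, each node a function over buffers (a toy model; the stage names and string tags are mine, not OpenCORE API):

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

// Sketch of the playback graph: parser node -> OMX decoder node -> MIO out,
// modeled as composed functions over access units.
public class PlayerGraph {
    public static List<String> run(List<String> packets) {
        Function<String, String> parserNode = p -> "parsed(" + p + ")";
        Function<String, String> omxDecoder = p -> "decoded(" + p + ")";
        Function<String, String> mioOut     = p -> "rendered(" + p + ")";
        return packets.stream()
                .map(parserNode.andThen(omxDecoder).andThen(mioOut))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(run(List.of("au0")));
        // [rendered(decoded(parsed(au0)))]
    }
}
```

The recording graph is the same idea with the arrows reversed: MIO in feeds the OMX encoder, whose output goes to the recorder node.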

5.on the 3530, how does the arm communicate with the dsp?
using the dsp-bridge
a special compiler is used to generate two binaries, one for the dsp and one for the arm
put them under directories from which each will be picked up by the dsp or the arm

general display

1.linux
x client
xserver
usermode lib
kernel driver

2.framebuffer hardware
it is a complete piece of hardware: it takes a block of memory and converts it to pixels

framebuffer layers:
opengl feeds instructions to the gpu layer
the gpu layer combines this with overlay devices like the cursor and camera preview
the video memory layer fills the back buffer and front buffer
the display layer dumps the picture
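The back/front buffer handoff can be sketched as a flip (a toy model, my names): the gpu/video-memory layer renders into the back buffer, then a pointer swap makes it the front buffer that the display layer scans out.

```java
// Sketch of double buffering: draw into the back buffer, flip (pointer swap,
// no copy), then the display layer dumps the front buffer.
public class DoubleBuffer {
    private int[] front = new int[4];
    private int[] back  = new int[4];

    public void draw(int pixel) {               // video memory layer fills back
        java.util.Arrays.fill(back, pixel);
    }

    public void flip() {                        // swap buffers, no copy
        int[] t = front; front = back; back = t;
    }

    public int visiblePixel() { return front[0]; } // what the display layer dumps

    public static void main(String[] args) {
        DoubleBuffer fb = new DoubleBuffer();
        fb.draw(0xFF00FF);
        System.out.println(fb.visiblePixel() == 0xFF00FF); // false: not flipped yet
        fb.flip();
        System.out.println(fb.visiblePixel() == 0xFF00FF); // true
    }
}
```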

3.if some gl operations cannot be done by the hw, then it should fall back to sw.
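The fallback rule amounts to a per-operation capability check (a toy sketch; the operation names and the set of hw-capable ops are invented for illustration): try the hw path first, take the sw path when the hw cannot do it.

```java
import java.util.Set;

// Sketch: dispatch each GL-like operation to hw if supported, else sw fallback.
public class RenderDispatch {
    // Hypothetical list of operations this gpu can accelerate.
    private static final Set<String> HW_OPS = Set.of("blit", "fill", "scale");

    public static String render(String op) {
        return HW_OPS.contains(op) ? "hw:" + op : "sw:" + op;
    }

    public static void main(String[] args) {
        System.out.println(render("blit"));    // hw:blit
        System.out.println(render("shadow"));  // sw:shadow
    }
}
```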

4.an activity can give a set of gl commands and an output memory block; the gpu will render into this
memory, and the cpu will copy this memory to video ram for display

an activity can give a set of gl commands and mapped vram memory to the gpu; the gpu will render into this and it will be directly displayed

software rendering can happen in one way, where it does all the gpu operations itself and renders into
mapped vram. a lot of temporary buffers are required, and a lot of computation as well.

conclusion: /dev/fb0 can be both an accelerated and a non-accelerated device.

/dev/fb0 -- software renderer
a /dev/devmem device is needed for hardware rendering

this abstracts device memory, and this memory is mapped to user space directly
/dev/fb0 has an additional interface for hardware rendering; it can be enabled or disabled.

frame memory is mapped to the activity whether it is hw or sw rendering. in hw rendering the enclosing operations are done in hw, otherwise in sw.
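That last point can be sketched as: the activity always holds the same mapped frame memory, and only who writes into it differs (a toy model; the class and the int-array stand-in for mmap'd memory are mine):

```java
// Sketch: the same mapped frame memory is filled by either the hw path
// or the sw path; the activity sees identical pixels either way.
public class MappedFrame {
    private final int[] frame = new int[8];   // stands in for mmap'd frame memory

    public void render(boolean hwAccel, int pixel) {
        if (hwAccel) {
            // hw path: the enclosing operations run on the gpu, which
            // writes into the same mapped memory
            java.util.Arrays.fill(frame, pixel);
        } else {
            // sw path: the cpu loops over the pixels itself
            for (int i = 0; i < frame.length; i++) frame[i] = pixel;
        }
    }

    public int pixel(int i) { return frame[i]; }

    public static void main(String[] args) {
        MappedFrame f = new MappedFrame();
        f.render(true, 7);
        System.out.println(f.pixel(0)); // 7
        f.render(false, 9);
        System.out.println(f.pixel(0)); // 9, same memory, different path
    }
}
```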