main hall
==========
1.a/v receiver
2.lcd tv
3.dvd player
4.pc
5.netgear wg ap client mode
6.cable tv tuner
7.cable tv splitter/booster
8.optional dv camcorder
9.optional usb cam
10.optional vcr
11.optional network drive
12.ir tx/receiver
13.wireless keyboard/mouse
14.high VA apc ups
15.optional gaming consoles
16.optional wii
17.optional wifi drones
18.optional gps receivers
19.optional pstn sip converters
20.fax/printer
connection
==========
pc hdmi and dvd hdmi connected to hdmi in1 and in2 of the a/v receiver
pc ethernet port connected to router
cable tv tuner/booster connected to pc
ir connected to pc
ups connected to a/v receiver
room1
=====
1.netgear dg router
2.broadband modem
3.pc
4.lcd
5.optional dv camcorder
6.optional usb cam
7.optional vcr
8.optional network drive
9.ir tx/receiver
10.wireless keyboard/mouse
connection
==========
pc hdmi out connected to lcd
broadband modem connected to the dg router
pc connected to the dg router over wireless
dg router connected to the wg ap over wireless
Sunday, November 21, 2010
android 3d accel
1./dev/pmem_gpu0
2./dev/hw3d
3./dev/hw3dc
4./dev/graphics/fb0
google tv in short
a mythtv plugin for a web browser
the box needs cable tv and broadband inputs
the browser should be able to search both cable and broadband
the cable side should respond to http queries
the browser should be able to display both web and cable content
in the context of tv, web operations should be possible
It falls back to software copybit which is slow as hell, because each line of an image must be copied by the CPU (with memcpy), not even talking about software scaling. That's why copybit and 2D hardware are so important.
android flingers
1.learning has a wavy pattern
the upper crest starts with abstraction
the lower crest is concrete
upper (starts with some names) --- lower (processes, dlls) --- upper (associations) --- lower (interface functions) --- upper (categories, channels) --- lower (context, binary modules)
mediaserver-->omx-->codec
an omx buffer is submitted with an input buffer and an offset, and an output buffer is returned (sketch below)
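a minimal sketch of that contract using the stock OpenMAX IL headers; the component handle, the pre-allocated buffer headers and get_decoder_handle() are hypothetical placeholders, not real android functions:

// sketch only: assumes OMX_Core.h / OMX_Component.h from the OpenMAX IL spec
// and an already-initialized decoder component; get_decoder_handle() and the
// two buffer headers are hypothetical placeholders.
#include <OMX_Core.h>
#include <OMX_Component.h>
#include <cstring>

extern OMX_HANDLETYPE get_decoder_handle();   // hypothetical
extern OMX_BUFFERHEADERTYPE* in_hdr;          // allocated via OMX_AllocateBuffer
extern OMX_BUFFERHEADERTYPE* out_hdr;

void push_one_frame(const unsigned char* frame, unsigned len) {
    OMX_HANDLETYPE comp = get_decoder_handle();

    // the "input buffer and offset" from the note above
    memcpy(in_hdr->pBuffer, frame, len);
    in_hdr->nOffset    = 0;
    in_hdr->nFilledLen = len;

    // hand the compressed data to the codec ...
    OMX_EmptyThisBuffer(comp, in_hdr);

    // ... and ask for a decoded output buffer; it comes back asynchronously
    // through the FillBufferDone callback
    OMX_FillThisBuffer(comp, out_hdr);
}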
audio
======
af-->libaudio.so
af-->libaudio.so (alsa)
video
=====
uses EGL as interface
sf-->libhgl.so
sf-->libagl.so
codec
=====
uses OMX as interface
mediaserver-->omxnnn.so
pixelflinger/libhgl builds all the surfaces defined for an activity and passes them to sf
sf layers them, composes them, and renders through fb0
libhgl has inputs and outputs: the inputs are surfaces and their data,
the output is the composed buffer
if this output buffer memory can be fed directly to fb0, that is efficient;
if it cannot, a memcpy is required before pushing to fb0 (sketch below)
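a minimal sketch of that memcpy fallback through the generic linux framebuffer interface; the composed buffer is assumed to already exist and error handling is minimal:

// sketch only: the plain linux framebuffer path the memcpy fallback boils
// down to; "composed" stands for the compositor's output buffer.
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/fb.h>
#include <cstring>

void push_to_fb0(const void* composed, size_t bytes) {
    int fd = open("/dev/graphics/fb0", O_RDWR);
    if (fd < 0) return;

    fb_var_screeninfo vinfo;
    fb_fix_screeninfo finfo;
    ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);   // resolution, bits per pixel
    ioctl(fd, FBIOGET_FSCREENINFO, &finfo);   // line length, mapped size

    // map the display controller's memory and copy the composed buffer in;
    // this cpu copy is exactly the cost being pointed out above
    void* fb = mmap(nullptr, finfo.smem_len, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, 0);
    if (fb != MAP_FAILED) {
        memcpy(fb, composed, bytes < finfo.smem_len ? bytes : finfo.smem_len);
        munmap(fb, finfo.smem_len);
    }
    close(fd);
}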
egl buffer types
native
pushbuffer
fb0
All these are parallel
cpu
gpu multiple layer composer
display renderer hw
codec hw
an activity indirectly calls eglSwapBuffers; this sends the data to the gpu to compose, and the activity waits until composition completes, then the composed buffer is given to fb0 (egl sketch below)
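a minimal sketch of the standard egl sequence behind that call; native_window is a hypothetical platform window handle and the gl draw calls are omitted:

// sketch only: generic egl setup plus the eglSwapBuffers call the note refers to.
#include <EGL/egl.h>

void draw_one_frame(EGLNativeWindowType native_window) {
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, nullptr, nullptr);

    EGLConfig cfg;
    EGLint n;
    const EGLint attrs[] = { EGL_SURFACE_TYPE, EGL_WINDOW_BIT, EGL_NONE };
    eglChooseConfig(dpy, attrs, &cfg, 1, &n);

    EGLSurface surf = eglCreateWindowSurface(dpy, cfg, native_window, nullptr);
    EGLContext ctx  = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, nullptr);
    eglMakeCurrent(dpy, surf, surf, ctx);

    // ... gl draw calls for the activity's surfaces go here ...

    // this hands the frame to the gpu/compositor; the caller effectively
    // blocks until the swap completes and the buffer can reach fb0
    eglSwapBuffers(dpy, surf);
}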
copybit.so can wrap any /dev/nnn device; it has to implement the copybit.h interface, and it is used for 2d blits (sketch below)
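a minimal sketch of loading such a module through libhardware, assuming the hardware.h / copybit.h headers of that era; the blit call itself is only commented because the image and region structs are vendor-buffer specific:

// sketch only: hw_get_module / copybit_open come from hardware.h and copybit.h.
#include <hardware/hardware.h>
#include <hardware/copybit.h>

int open_copybit(copybit_device_t** out_dev) {
    const hw_module_t* module = nullptr;
    int err = hw_get_module(COPYBIT_HARDWARE_MODULE_ID, &module);
    if (err != 0)
        return err;                       // no copybit HAL: fall back to sw copy

    err = copybit_open(module, out_dev);  // opens the 2d blit engine
    // later: (*out_dev)->blit(*out_dev, &dst_image, &src_image, &region);
    // and copybit_close(*out_dev) when done
    return err;
}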
android code assumes the gpu and display to be the same hw device, /dev/graphics/fb0, with different ioctls for the different operations
maybe before writing to the gpu, clipping and other operations are done on the figures, and later they are rendered one by one
android surface flinger
1.libhgl.so and libagl.so are loaded into sf.
2.every activity can have one or more surfaces
3.these surfaces may be of different types
4.some of these surfaces will use hw accel, some are software
5.before pushing to fb0, the surfaces will be composited
6.a combination of libEGL.so and libGLESv2.so will route gl calls to libhgl or libagl.
7.can an activity have a hw surface and a sw surface in parallel? can these two surfaces be composited together into fb0?
8.maybe both libhgl and libagl are loaded in parallel and the activity decides whether it uses libagl or libhgl (loading sketch below).
libagl and libhgl calls have a one-to-one correspondence, so partial use of libagl and libhgl is not possible.
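a minimal sketch of such a load-with-fallback; this is not the real android egl loader (which reads egl.cfg), just plain dlopen:

// sketch only: prefer the hardware gl library, fall back to the software one.
#include <dlfcn.h>
#include <cstdio>

void* load_gl_backend() {
    void* handle = dlopen("libhgl.so", RTLD_NOW);        // hardware gl
    if (!handle)
        handle = dlopen("libagl.so", RTLD_NOW);          // software fallback
    if (!handle) {
        fprintf(stderr, "no gl backend: %s\n", dlerror());
        return nullptr;
    }
    // entry points would then be resolved with dlsym(handle, "glDrawArrays") etc.
    return handle;
}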
9.hardware codec support can be added to android in 2 ways: within a framework or outside it
1.within the framework --- take a working codec driver with openmax support and modify it;
take template code from a framework node and change it to work like the above;
this will work with the standard android player. openmax can be adapted to fit into the android framework
2.implement a player, and within the player code interface with sf and af directly;
if gstreamer already has the codec integrated, use it with a new player engine.
10.in an accelerated codec scenario the application pushes a frame descriptor through the driver to the hardware; the descriptor has a source buffer address and a destination buffer address. on decoding,
the frame is written to the destination buffer, from where the application takes it and sends it to fb0 (hypothetical descriptor sketch below).
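a purely hypothetical descriptor and ioctl, only to make the src/dst flow concrete; no real decoder driver defines these names:

// sketch only: hypothetical frame descriptor for a hypothetical /dev/hwcodec0.
#include <cstdint>
#include <sys/ioctl.h>

struct frame_descriptor {            // hypothetical layout
    uint64_t src_phys_addr;          // compressed frame (bus address)
    uint32_t src_len;
    uint64_t dst_phys_addr;          // where the decoded frame should land
    uint32_t dst_len;
};

#define HWCODEC_DECODE_FRAME _IOW('h', 1, frame_descriptor)   // hypothetical

int decode_one_frame(int codec_fd, const frame_descriptor& desc) {
    // the driver decodes into dst_phys_addr; the app then pushes that buffer
    // to fb0 (see the framebuffer sketch earlier)
    return ioctl(codec_fd, HWCODEC_DECODE_FRAME, &desc);
}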
11.video acceleration === hardware codec
display acceleration === hardware gl (compositing)
audio accel ===
audio mixing === hardware level
Saturday, November 20, 2010
android media framework 2
1.media abstraction
stock linux
============
process
v4l
gstreamer alsa
android
=======
activity
mediaserver
surfaceflinger audioflinger
1.An activity makes an association with surfaceflinger and audioflinger(manifested as an object or handle)
2.An activity knows what datasource to use
3.An activity passes this information to media server
4.mediaserver checks the datasource to determine which engine to use (checks the extension, .mp4 etc);
each engine is a dll (dispatch sketch below)
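a minimal sketch of such an extension-to-engine lookup; the dll names here are made up for illustration:

// sketch only: hypothetical mapping from datasource extension to engine dll.
#include <map>
#include <string>

std::string pick_engine(const std::string& datasource) {
    static const std::map<std::string, std::string> engines = {
        { ".mp4", "libmp4player.so"    },   // hypothetical dll names
        { ".3gp", "libmp4player.so"    },
        { ".ogg", "libvorbisplayer.so" },
        { ".mid", "libmidiplayer.so"   },
    };
    auto dot = datasource.rfind('.');
    if (dot == std::string::npos)
        return "libdefaultplayer.so";       // fallback engine (hypothetical)
    auto it = engines.find(datasource.substr(dot));
    return it != engines.end() ? it->second : "libdefaultplayer.so";
}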
5.mediaserver passes the sf,af context to engine
6.engine gets the frames from the datasource
7.engine uses the sf and af context to post the frames to corresponding objects
8.sf takes care of underlying hardware
9.af takes care of underlying hardware
10.does the activity know whether sf will use hw rendering or sw rendering? it can also be that the activity does not know and sf tries the available renderers in a chained manner
11.similar to af
12.sf will have additional dlls for the hw driver or the software driver
13.af will have additional dlls for the sw driver or the hw driver
14.where do codecs come into the picture? when taking frames from the datasource, mediaserver will
send them to codec engines (dlls) to get a converted buffer; this buffer is ultimately passed to sf and af.
15.again the codecs will be chosen based on hw or sw; there will also be a chain of codecs and
a fallback mechanism
16.every hw codec device will expose an interface to mediaserver (/dev/dsp1 etc)
17.whenever there is an interface in java, think of the instances it will represent. the MediaPlayer interface represents mp4, 3gp, vorbis, midi instances.
18.so the activity passes the triplet to a mediaplayer instance. each mediaplayer instance is also
registered with mediaserver, and each instance is declared in a dll.
19.mediaserver for each activity can have 5 prongs
player prong
aud codec prong ... this can be multiple
vid codec prong ... this can be multiple
sf prong ... single
af prong ... single
20.in the context of an activity, sf can have 3 prongs
sw renderer
3d renderer
2d renderer
hw renderer
2d renderer
3d renderer
overlay renderer
In addition, globally, an overlay activity can work within the activity context;
the overlay renderer will punch through the activity surface to render.
different renderer dlls register with sf at boot time or dynamically.
sf maintains a lookup table to see which dll to use for the current activity context (registry sketch below).
the activity passes some parameter to sf to tell it which renderer to use;
it may also be a fallback mechanism.
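a minimal sketch of such a registry with a fallback chain; none of these names are real surfaceflinger apis:

// sketch only: hypothetical renderer registry, register + lookup + fallback.
#include <functional>
#include <string>
#include <vector>

struct Renderer {
    std::string name;                   // e.g. "overlay", "opengl-hw", "sw"
    std::function<bool()> available;    // probe: can this renderer run now?
};

class RendererRegistry {
public:
    // renderer dlls would call this at boot time or when loaded dynamically
    void register_renderer(Renderer r) { table_.push_back(std::move(r)); }

    // walk the table in priority order, falling back until one is usable
    const Renderer* pick() const {
        for (const auto& r : table_)
            if (r.available())
                return &r;
        return nullptr;                 // nothing usable
    }

private:
    std::vector<Renderer> table_;
};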
21.an activity will render into the display controller's buffer, which is global.
the overlay component will have direct access to this controller buffer, so it can render into it
irrespective of activity requirements.
this overlay component will have a link to sf and to an external component; sf gives the window info, the external component gives the data
22.sf talks with the sw renderer dll (which writes directly to /dev/graphics/fb0)
and with an opengl renderer dll (which talks to a hw card supporting opengl, maybe /dev/graphics/hwaccel0).
so sf first calls the opengl hw to manipulate buffers
and gets the modified buffer in return, which it sends to the fb;
or it goes opengl hw first, then opengl hw again, which internally writes to fb0
23.a view can have multiple surfaces, and each surface can have a single glcontext; the g1 and the emulator don't
allow more than one glcontext per view.
fb0 is a surface.
24.an activity does its drawing on a surface. this surface might be an abstraction for hw, like an
accelerator. once this is over it is handed over to surfaceflinger, which will then write this
buffer to the screen. before actually rendering to the screen, sf has to join in the menu, status bar etc and then write to hardware; the content is copied twice before rendering.
25.android has composing apis and rendering apis
rendering is common to activities, movie playback etc
26.android uses the gpu through libGLES_android.so
currently the generic gui uses 2d rendering via skia.
the generic gui doesn't use opengl.
only explicit activities use opengl and hence the gpu.
Friday, November 19, 2010
what's happening in the graphics world
1.ATI vs NVIDIA
2.h264,hardware codec
3.http://labs.divx.com/DivX-H264-Decoder-DXVA
Note:
all OSes provide a graphics framework
a third-party provider can add their codec support in the framework
a third-party provider can add their hardware support in the framework
oem A can add their card with hardware codec-B support
oem B can add support for converting their proprietary format-C to codec-B, again using the framework
similarly for A1, A2...
and codec-B1, codec-B2 etc
a frameworkB component and a frameworkA component (frameworkB can be a dll, frameworkA maybe a .sys)
there will be a general OS framework (opencore); then for video and sound there can be other separate frameworks (video v4l, audio alsa)
opencore and gstreamer are os frameworks