1.a diagrammatic representation of activities for the next 1 hr, with a goal
the diagram should have each component numbered
2.placeholder programming, abstract programming
when the idea is not concrete, build a framework
it would involve functions, statics and visitors
a use case is when rearranging existing code, anticipating changes to incorporate
in the next version. the existing code is supposed to fit into the current framework
3.refactoring code may be crude, rigid
issues: if the code is layered,
shifting one function from one layer to another means moving all dependent functions in the other layer.
this will break layer opaqueness; the lower layer becomes visible in the upper.
mostly happens with loop nullers (progressers) when one is in the top layer and another is in the bottom
4.another way, for a quick solution to the above problem, is
redundant state and data structures; this mirrors the state in either layer and hence
it can be used explicitly within a layer (a sketch follows at the end of this note)
sometimes, to get a quick workaround, we may have to shift processing from one layer to another
this would involve moving one .c file and implementing only the guest layer's functions in it
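a minimal c sketch of the redundant-state idea in point 4 above; the two-layer split and all names are invented for illustration:

/* the upper layer keeps a mirror of the one lower-layer flag it needs;
   the mirror is refreshed through a notify call, so the lower layer's
   internals stay opaque to the upper layer */
struct lower_state { int link_up; /* ...private fields... */ };
struct upper_state { int link_up_mirror; };

/* called by the lower layer whenever its state changes */
void lower_notify(struct upper_state *u, const struct lower_state *l)
{
    u->link_up_mirror = l->link_up;   /* mirror instead of reaching down */
}

int upper_can_send(const struct upper_state *u)
{
    return u->link_up_mirror;         /* upper layer reads only its own copy */
}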
Tuesday, November 30, 2010
Monday, November 29, 2010
insight:
1.adaptive hardware
with powerful web os's, it should be possible to create wearable os's.
i.e. on command, the working os should upload to the web and download to a device.
the os has to be sleek, with a rootfs independent of the kernel
powerful boot-time http support
overlay an http task between boot and normal running.
call it amoeba
the hardware has to act like a mobo at one time, and as a modem, disk or display at another
2.solar robots
a flying solar accessor
according to the sun's movement this bot is supposed to turn or fly
it should carry an adapter with it to store charge
if packet power is present, the same technology can be used to provide it as an access point with maximum efficiency
3.
kernel module
1. in the local module dir on ubuntu, make -C kdir M=$(pwd) ... works
in the local dir on ubuntu, make -C kdir M=$(pwd) modules puts the output in the standard directory
...M=$(shell pwd) .. doesn't work
---M=$(PWD) ... may work
all the above go in the makefile (see the sketch below)
kdir is the separate kernel build directory
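a minimal out-of-tree makefile sketch matching the notes above; hello.c and the kdir path are placeholders:

# out-of-tree module makefile; kdir is the separate kernel build dir
obj-m := hello.o
KDIR  ?= /path/to/kdir

all:
	$(MAKE) -C $(KDIR) M=$(PWD) modules   # M=$(PWD), per the note above

clean:
	$(MAKE) -C $(KDIR) M=$(PWD) clean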
Sunday, November 28, 2010
measured development
1.calibration of the process
2.per-day objectives, finer split
3.requirements per day
4.functional block, bit block views
windows vista ndis 6 driver and mobile stack
1.use preexisting phy code
2.need a kernel module .. non-pnp
3.need a nic ndis 6 driver
4.can export any function from a driver/module using
__declspec(dllexport), __declspec(dllimport) (a sketch follows after this list)
5.for importing a function, the .lib of the exporting module
should be statically linked
6.a kernel dll is different from the above,
it gets loaded in the context of the first call of a driver,
check when its DriverEntry is called to test
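a minimal sketch of points 4-5, assuming two modules named exporter and importer (both names are placeholders); the importer statically links exporter.lib:

/* exporter.c -- builds exporter.sys and produces exporter.lib */
__declspec(dllexport) int GetLinkState(void) { return 1; }

/* importer.c -- statically links exporter.lib (point 5) */
__declspec(dllimport) int GetLinkState(void);

void Poll(void)
{
    int up = GetLinkState();   /* resolved through the exporter's .lib */
    (void)up;
}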
star type data processing
1.at least 3 threads contribute (a thread sketch follows after this note)
2.one thread: atomic write only
3.one thread: check continuity
check single condition
update states
process
4.one thread:
check continuity
check single
update
process
goto continuity
global continuity
check single
process
update
check another
...
5.using split names in c code
using at least 15-20 char names would give a pleasant
look to the code
But remember, every time that variable is used
you need to type 15 chars
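a loose c sketch of the star layout above, assuming c11 atomics and pthreads; the continuity/condition names are illustrative:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static _Atomic int seq;                    /* written by exactly one thread */

static void *writer(void *arg)             /* 2. atomic write only */
{
    (void)arg;
    for (int i = 1; i <= 5; i++) { atomic_store(&seq, i); usleep(1000); }
    return NULL;
}

static void *checker(void *arg)            /* 3./4. check, update, process */
{
    int last = 0;
    (void)arg;
    while (last < 5) {
        int now = atomic_load(&seq);
        if (now == last) continue;         /* check continuity */
        if (now != last + 1)               /* check single condition */
            printf("gap %d -> %d\n", last, now);
        last = now;                        /* update states */
        printf("process %d\n", now);       /* process */
    }
    return NULL;
}

int main(void)
{
    pthread_t w, c;
    pthread_create(&c, NULL, checker, NULL);
    pthread_create(&w, NULL, writer, NULL);
    pthread_join(w, NULL);
    pthread_join(c, NULL);
    return 0;
}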
Saturday, November 27, 2010
c functions
1.parameter validation
2.entry-level state validation
3.incoming/outgoing traces
4.passing last states and updates to global debug access
5.aggressive benign checks
6.use of macros, inlines
7.use the check-and-process paradigm. split check and processing blocks. effective use of while (a sketch follows)
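a compact c sketch of the checklist above; every name is illustrative, not from any real codebase:

#include <stdio.h>

#define TRACE(fmt, ...) printf("[trace] " fmt "\n", ##__VA_ARGS__)   /* 3. in/out traces */

enum state { IDLE, RUNNING };
struct widget { enum state state; int work_left; };

static int g_last_speed;                             /* 4. last inputs parked for global debug access */

static inline int can_step(const struct widget *w)   /* 6. inline; the "check" block of 7 */
{
    return w->state == RUNNING && w->work_left > 0;
}

int widget_run(struct widget *w, int speed)
{
    TRACE("enter widget_run speed=%d", speed);
    if (w == NULL || speed <= 0) return -1;          /* 1. parameter validation */
    if (w->state != IDLE) return -2;                 /* 2. entry-level state validation */
    g_last_speed = speed;
    w->state = RUNNING;
    while (can_step(w)) {                            /* 7. check ... */
        w->work_left -= speed;                       /* ... then process */
        if (w->work_left < 0) w->work_left = 0;      /* 5. benign check: clamp, don't crash */
    }
    w->state = IDLE;
    TRACE("exit widget_run work_left=%d", w->work_left);
    return 0;
}

int main(void)
{
    struct widget w = { IDLE, 10 };
    return widget_run(&w, 3);
}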
android media fw
1.use pvlogger to get a complete trace to start
Friday, November 26, 2010
incremental development
1.two stages
a. functional block
b. bit block
a functional block involves
prototype, code modifications, dependency changes, additions, deletions together
--->mostly compilable and runnable
a bit block involves
just the prototype ...etc
--->just compilable
Wednesday, November 24, 2010
android audio flinger
1.http://kzjblog.appspot.com/2010/03/6/Android-Audio-System-%281%29.html
2.copybit is the hw 2d hal, libhgl is the hw 3d hal???
Tuesday, November 23, 2010
android pmem and ashmem
1.pmem -- allocation code is in the kernel but it is used from user mode
ensures that memory is returned as
PAGE_SIZE,offset,len ... PAGE_SIZE,0,len, PAGE_SIZE,0,len
2.ashmem
a named memory block that is shared between processes and that the kernel is allowed to free. the kernel is not allowed to free standard shared memory. (a usage sketch follows at the end of this note)
http://cs736-android.pbworks.com/w/page/5834465/ASHMEM
surfaceflinger has a total 8mb heap; it shares this heap with all processes for their surfaces.
when a process requests space, it allocates from this heap and returns a pointer to the allocated chunk. once the process is out of focus the pointer is nulled
this is not for surface data ... but for surface control. the control contains pointers to 2 data buffers
each layer has a corresponding surface
layerbuffer doesn't have one
each surface has 2 buffers
these 2 can be from ashmem or pmem
a canvas can be over a layer
a canvas can be associated with multiple bitmaps
it seems that the surfaceflinger allocator uses pmem and/or normal memory. some activities use the pmem heap and some use the normal heap. surfaceflinger passes this buffer down in two different ioctls to the fb driver.
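a hedged c sketch of creating one named, kernel-reclaimable region through /dev/ashmem; it assumes an android kernel that provides <linux/ashmem.h> (ASHMEM_SET_NAME / ASHMEM_SET_SIZE), and error handling is trimmed:

#include <fcntl.h>
#include <stddef.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/ashmem.h>

void *ashmem_region(const char *name, size_t size, int *out_fd)
{
    int fd = open("/dev/ashmem", O_RDWR);
    if (fd < 0) return NULL;
    ioctl(fd, ASHMEM_SET_NAME, name);   /* the "named memory block" */
    ioctl(fd, ASHMEM_SET_SIZE, size);   /* must be set before mmap */
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { close(fd); return NULL; }
    *out_fd = fd;                       /* this fd can be passed to another process */
    return p;
}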
pattern matching algos and artificial brain
1.efficient pattern matching algos will lead to artificial intel
2.pattern matching leads to precise quantization info and the making of tokens
3.the tokens so obtained can be compared with a token database
4.from a set of tokens, it can create a context database dynamically
5.transactions involve tokens, contexts and outputs
6.a processing loop can generate permutations of transactions
7.outputs can be labeled desirable, deferred, expel
8.based on the above states, transactions can be chained
An implementation
a game involving transaction doers and decision makers, within a set of token database and a set of
rules for the desired, deferred, expel states.
android top down
1.activity ---- in process 1
2.windowserver ---- in process 2
4.mediaserver --- in process 4
5.surfaceflinger -- in process 5
6.audioflinger ---in process 6
7.rild -- in process 7
Monday, November 22, 2010
android pvplayer mp4 3gp stack
/system/bin/xxx.so
AndroidAudioOutput::,AndroidSurfaceOutput::
1.Java
======
java ------ MediaPlayer.java
jni ------ libmedia_jni.so(wrapper)
native ------ libmedia.so
native ------ libui.so
native ------ libhardware.so
2.Player engine
===============
libpvplayer.so,libopencoreplayer.so
3.mediaserver
native -- libmediaplayerservice.so
/system/bin/(frameworks/base/media/mediaserver/)
4.framework
stagefright,opencore,gstreamer
3.Parser Node
==============
libpv.so ---- parse source
Video
#################
4.Decoder Node
===============
directory ---- external/opencore/codecs_v2/omx/omx_mycodec
test app ---- external/opencore/codecs_v2/omx/omx_testapp
omx_nnn.so
omx_mmm.cfg
ti decoder -- libOMX_Core.so
info hw codec --- codecsv2/omx/omx_common/src/pv_omxmastercore.cpp
5.MIO node
===========
AndroidVideoOutput::
OSCL_xxx
libopencorehw.so
6.Surfaceflinger
================
libsurfaceflinger.so
7.video client
==============
libagl.so
Audio
###############
8.Decoder Node
===============
libvorbisdec.so
9.MIO node
===========
AndroidAudioOutput::
OSCL_xxx
10.Audioflinger
================
libaudioflinger.so
11.audio client
==============
libpv.so/libaudio.so implements hardware interface AudioHardwareInterface
AudioHardwareInterface base class is in Audioflinger
"AUDIO_SERVICE"
Accelerated Video Codec
Accelerated Video Hardware
Combined Acceleration codec+video
#################################
coming
opencore
node-if node-if node-if
omx-if mio-if
Atom + Nvidia Ion Myth
any doubt on libagl and libhgl, see the directfb documents
directfb uses ioctls to distinguish hw or sw rendering
android framework components
1.OMX components are decoders and others
2.MIO components are sources/sinks and others
A module can have multiple interfaces. A graph node is a module. It can have a control interface.
At init time, based on queries on the control interface, the other interfaces are linked.
Interfaces even exchange pool allocators. for eg a pool can be created at the MIO layer, its fd passed to the
OMX layer, and it is used at this layer for allocations
physical memory can be allocated at MIO and passed to the frame source to fill in.
3.node --- parser
node --- codec ----- OMX (a handle sketch follows at the end of this note)
node --- sink/source --- MIO
each node will have cfg files
4.currently pvcore expects
MIO IN --> OMX encoder --> PV recorder
PV player ---> OMX decoder --> MIO out
the common MIO out is libopencorehw.so; it talks to sf
5.on 3530, how does the arm communicate with the dsp?
using the dsp-bridge
a special compiler is used to generate binaries, one for the dsp, one for the arm
put them under the directories where each will be picked up by the dsp or the arm
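a hedged c sketch of bringing one OMX decoder node up through the standard OpenMAX IL core calls; the component name string is illustrative (the master core resolves names through the .cfg files):

#include <OMX_Core.h>
#include <stddef.h>

static OMX_ERRORTYPE on_event(OMX_HANDLETYPE h, OMX_PTR app, OMX_EVENTTYPE e,
                              OMX_U32 d1, OMX_U32 d2, OMX_PTR data)
{ return OMX_ErrorNone; }
static OMX_ERRORTYPE on_empty(OMX_HANDLETYPE h, OMX_PTR app, OMX_BUFFERHEADERTYPE *b)
{ return OMX_ErrorNone; }
static OMX_ERRORTYPE on_fill(OMX_HANDLETYPE h, OMX_PTR app, OMX_BUFFERHEADERTYPE *b)
{ return OMX_ErrorNone; }

int bring_up_decoder(void)
{
    OMX_CALLBACKTYPE cb = { on_event, on_empty, on_fill };
    OMX_HANDLETYPE dec = NULL;
    if (OMX_Init() != OMX_ErrorNone) return -1;
    /* illustrative component name; a real one comes from the cfg lookup */
    if (OMX_GetHandle(&dec, (OMX_STRING)"OMX.PV.avcdec", NULL, &cb) != OMX_ErrorNone)
        return -1;
    /* allocate port buffers, then walk Loaded -> Idle -> Executing */
    OMX_SendCommand(dec, OMX_CommandStateSet, OMX_StateIdle, NULL);
    return 0;
}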
general display
1.linux
x client
xserver
usermode lib
kernel driver
2.framebuffer hardware
it's complete hardware: it takes a block of memory and converts it to pixels
framebuffer layers
opengl feeds instructions to gpu layer
gpu layer combines this with overlay devices like cursor,camera preview
gpu layer
videomemory layer
it fills the back buffer ,front buffer
display layer
dumps the picture
3.if some gl operations cannot be done by hw then it should fall back to sw.
4.an activity can give a set of gl commands and an output memory block; the gpu will render into this
memory, and the cpu will copy this memory to video ram for display
an activity can give a set of gl commands and mapped vram memory to the gpu; the gpu will render into this and it will be displayed directly
software rendering can happen in one way, where the cpu does all the gpu operations and renders into
mapped vram. lots of temporary buffers required. lots of computation as well.
conclusion: /dev/fb0 can be both an accelerated and a non-accelerated device.
/dev/fb0 software renderer (a mmap sketch follows at the end of this note)
a /dev/devmem device is needed for hardware rendering
this abstracts device memory, and this memory is mapped to user space directly
/dev/fb0 has an additional interface for hardware rendering. it can be disabled or enabled.
frame memory is mapped to the activity whether it is hw or sw rendering. in hw rendering, enclosing operations are done in hw, otherwise in sw
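a minimal c sketch of the sw-rendering path above: map /dev/fb0 and write pixels with the cpu (standard linux fbdev calls):

#include <fcntl.h>
#include <linux/fb.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    struct fb_var_screeninfo vi;
    struct fb_fix_screeninfo fi;
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) return 1;
    if (ioctl(fd, FBIOGET_VSCREENINFO, &vi) || ioctl(fd, FBIOGET_FSCREENINFO, &fi))
        return 1;
    uint8_t *fb = mmap(NULL, fi.smem_len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) return 1;
    /* every byte is moved by the cpu -- exactly why 2d blit hw (copybit) matters */
    for (uint32_t y = 0; y < vi.yres; y++)
        memset(fb + y * fi.line_length, 0x80, vi.xres * vi.bits_per_pixel / 8);
    munmap(fb, fi.smem_len);
    close(fd);
    return 0;
}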
Sunday, November 21, 2010
home network
main hall
==========
1.a/v receiver
2.lcd tv
3.dvd player
4.pc
5.netgear wg ap client mode
6.cable tv tuner
7.cable tv splitter/booster
8.optional dv camcorder
9.optional usb cam
10.optional vcr
11.optional network drive
12.ir tx/receiver
14.wireless keyboard/mouse
15.high VA apc ups
16.optional gaming consoles
17.optional wii
18.optional wifi drones
19.optional gps receivers
20.optional pstn sip converters
21.fax/printer
connection
==========
pc's hdmi,dvd hdmi connected to hdmi in1,in2 of a/v receiver
pc ethernet port connected to router
cable tv tuner/booster connected to pc
ir connected to pc
ups connected to a/v receiver
room1
=====
1.netgear dg router
2.broadband modem
3.pc
4.lcd
5.optional dv camcorder
6.optional usb cam
7.optional vcr
8.optional network drive
9.ir tx/receiver
10.wireless keyboard/mouse
connection
==========
pc hdmi out connected to lcd
bb modem connected to dg
pc connected to dg wireless
dg connected to wg wireless
android 3d accel
1./dev/pmem_gpu0
2./dev/hw3d
3./dev/hw3dc
4./dev/graphics/fb0
google tv in short
mythtv plugin to a web browser
the box needs a cable tv and a broadband input
the browser should be able to search on cable and on broadband
the cable should respond to http queries
the browser has to display web and cable content
in the context of tv, web operations should be possible
It falls back to software copybit which is slow as hell, because each line of an image must be copied by the CPU (with memcpy), not even talking about software scaling. That's why copybit and 2D hardware are so important.
android flingers
1.learning has a wavy pattern
upper crest starts with abstraction
lower crest is concrete
upper(starts with some names)---lower(processes,dlls)--upper(associations)----lower(interface functions)---upper(categories,channels)----lower(context,binary modules)
mediaserver-->omx-->codec
an omx buffer is sent with an input buffer and offset, and an output buffer is returned
audio
======
af-->libaudio.so
af-->libaudio.so (alsa)
video
=====
uses EGL as interface
sf-->libhgl.so
sf-->libagl.so
codec
=====
uses OMX as interface
mediaserver-->omxnnn.so
pixelflinger/libhgl, for an activity, makes all the defined surfaces and passes them to sf
sf layers and composes them and renders using fb0
libhgl has inputs and outputs; the inputs are surface data and surfaces
the output is the composed buffer
if this output buffer memory can be fed to fb0 then it's efficient
but if it cannot be, then a memcpy is required before pushing to fb0
eglbuffertypes
native
pushbuffer
fb0
All these are parallel
cpu
gpu multiple layer composer
display renderer hw
codec hw
an activity indirectly calls eglSwapBuffers; this will send data to the gpu to compose, and till it's
completed the activity waits; then the composed buffer is given to fb0 (a sketch follows at the end of this note)
copybit.so can export any /dev/nnn; it has to implement the copybit.h interface; it's used for 2d blit
android code assumes gpu/display to be the same hw device /dev/graphics/fb0, with different ioctls for operations
maybe before writing to the gpu, clipping and other operations are done on the figures,
and later each one is rendered
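a hedged sketch of the swap path described above, using standard egl 1.x calls; the native window is assumed to come from the platform:

#include <EGL/egl.h>

int present_frame(EGLNativeWindowType win)
{
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    EGLint attribs[] = { EGL_SURFACE_TYPE, EGL_WINDOW_BIT, EGL_NONE };
    EGLConfig cfg;
    EGLint n;
    eglInitialize(dpy, NULL, NULL);
    eglChooseConfig(dpy, attribs, &cfg, 1, &n);
    EGLSurface surf = eglCreateWindowSurface(dpy, cfg, win, NULL);
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);
    eglMakeCurrent(dpy, surf, surf, ctx);
    /* ... gl draw calls go here; they route to libagl or libhgl ... */
    eglSwapBuffers(dpy, surf);   /* hand the composed buffer on toward sf/fb0 */
    return 0;
}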
android surface flinger
1.libhgl.so and libagl.so are loaded into sf.
2.every activity can have one or more surfaces
3.these surfaces may be of different types
4.some of these surfaces will use hw accel some are software
5.before pushing to fb0 surfaces will be composited
6.a combination of libEGL.so and libGLESv2.so will route gl calls to libhgl or libagl.
7.can an activity have a hw surface and a sw surface in parallel? can these two surfaces be composited together into fb0?
8.maybe both libhgl and libagl will be loaded in parallel and the activity will decide if it's using libagl or libhgl.
libagl and libhgl calls have a one-to-one correspondence, so partial use of libagl and libhgl is not possible.
9.hardware codec support can be given in android in 2 ways: within a framework or outside
1.within the framework --- get the working codec driver in openmax, modify it
take template code from a framework node, change it to work like the above
this will work with the standard android player. openmax can be adapted to fit into the android framework
2.implement a player; within the player code, interface with sf and af
if gstreamer already has the codec integrated, use it with a new player engine.
10.in an accelerated codec scenario, the application pushes a frame descriptor through the driver to hardware; the descriptor has a source buff addr and a destination buff addr. on decoding the frame, the
frame is pushed to the destination buffer. from here the application takes it and sends it to fb0.
11.video acceleration === hardware codec
display acceleration === hardware gl (compositing)
audio accel ===
audio mixing === hardware level
Saturday, November 20, 2010
android media framework 2
1.media abstraction
stock linux
============
process
v4l
gstreamer alsa
android
=======
activity
mediaserver
surfaceflinger audioflinger
1.An activity makes an association with surfaceflinger and audioflinger(manifested as an object or handle)
2.An activity knows what datasource to use
3.An activity passes this information to media server
4.mediaserver checks the datasource to determine which engine to use(checks extension .mp4 etc)
each engine is a dll
5.mediaserver passes the sf,af context to engine
6.engine gets the frames from the datasource
7.engine uses the sf and af context to post the frames to corresponding objects
8.sf takes care of underlying hardware
9.af takes care of underlying hardware
10.does the activity know whether sf will use hw rendering or sw rendering??? it can also be that the activity does not know and sf will try the available renderers in a chained manner
11.similar to af
12.sf will have additional dlls for the hw driver or software driver
13.af will have additional dlls for the sw driver or hw driver
14.where do codecs come into the picture??? when taking frames from the datasource, mediaserver will
send them to the codec engines (dlls) to get a converted buffer. this buffer is ultimately passed to sf, af.
15.again, the codecs will be decided based on hw or sw. also there will be a chain of codecs and
a fallback mechanism (a toy chain sketch follows at the end of this post)
16.every hw codec device will expose an interface to mediaserver (/dev/dsp1 etc)
17.whenever there is an interface in java, ...think of the instances it will represent. for the MediaPlayer interface ... it represents mp4, 3gp, vorbis, midi instances.
18.so the activity passes the triplet to a mediaplayer instance. each mediaplayer instance is also
registered with mediaserver. and each instance is declared in a dll.
19.mediaserver, for each activity, can have 5 prongs
player prong
aud codec prong .... this can be multiple
vid codec prong ..... this can be multiple
sf prong ... single
af prong ... single
20.in the context of an activity, sf can have 3 prongs
sw renderer
3d renderer
2d renderer
hw renderer
2d renderer
3d renderer
overlay renderer
21.in addition, globally, an overlay activity can work within the activity context
overlay renderer will punch through the activity surface to render
different renderer dlls register with sf at boot time or dynamically.
sf maintains a lookup table to see which dll to use for current activity context.
activity passes some parameter to sf to tell sf which renderer to use.
it may be a fallback mechanism also.
an activity will render into the display controller's buffer, which is global.
the overlay component will have direct access to this controller buffer, so it can render into it
irrespective of activity requirements.
this overlay component will have links to sf and an external component. sf gives window info, the external component gives data
22.sf talks with the sw renderer dll (which directly writes to /dev/graphics/fb0)
and the opengl renderer dll (which talks to a hw card supporting opengl, maybe /dev/graphics/hwaccel0)
so sf first calls the opengl hw for manipulating buffers
it gets the modified buffer in return, which it sends to the fb
or first opengl hw, then opengl hw again, which internally writes to fb0
23.a view can have multiple surfaces; each surface can have a single glcontext; the g1 and the emulator don't
allow more than one glcontext per view.
fb0 is a surface.
24.an activity does its draw on a surface. this surface might be an abstraction for hw, like an
accelerator. once this is over it is handed over to surfaceflinger, which will then write this
buffer to the screen. before actually rendering to the screen, sf has to join in the menu, status bar etc and then write to the hardware. content is copied twice before rendering.
25.android has composing api's and rendering api's
rendering is common for activity,movie playback etc
26.android uses the gpu through libGLES_android.so
currently the generic gui uses 2d rendering via skia.
the generic gui doesn't use opengl.
only explicit activities use opengl and hence gpus.
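a toy c sketch of the fallback idea in items 10 and 15: try renderers in a chain until one accepts the frame; all names are invented:

typedef int (*renderer_fn)(const void *frame);

static int hw_render(const void *f) { (void)f; return -1; }   /* pretend hw is absent */
static int sw_render(const void *f) { (void)f; return 0; }    /* sw always works */

int render_with_fallback(const void *frame)
{
    renderer_fn chain[] = { hw_render, sw_render };           /* preferred first */
    for (unsigned i = 0; i < sizeof chain / sizeof chain[0]; i++)
        if (chain[i](frame) == 0)
            return 0;   /* first renderer in the chain that succeeds wins */
    return -1;
}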
Friday, November 19, 2010
whats happening in graphics world
1.ATI vs NVIDIA
2.h264,hardware codec
3.http://labs.divx.com/DivX-H264-Decoder-DXVA
Note:
All OS's provide a graphics framework
A thirdparty provider can add their codec support in the framework
A thirdparty provider can add their hardware support in the framework
oem A can add their card with hardware codec-B support
oem B can add support for converting their proprietary format-C to codec-B, again using the framework
similarly for A1,A2...
codec-B1,codec-B2 etc
frameworkB component and frameworkA component (frameworkB can be a dll, frameworkA maybe a .sys)
there will be a general OS framework (opencore), then for video and sound there can be other separate frameworks (video v4l, audio alsa)
opencore, gstreamer are os frameworks
Thursday, November 18, 2010
wireless bridge repeat
1.bridge -- routers communicate with each other over wifi; no clients can connect; use the ethernet ports to transfer data from one AP to the next
2.repeater --- aps can connect among themselves, and a laptop can connect as a client
1.wg 602 - g -- 1 lan
2.wn 802 - N --
3.wpn 802 -- g --
ADSL pci card ubuntu
1.conexant adsl pci modem
pppoe
voice modem ubuntu asterisk
2.Digium x100p
Linksys SPA3102
D500
setup voip phone to pstn switch
1.http://asteriskathome.sourceforge.net/handbook/#Section_3.2
using vlc,darwin to stream to mobile
1.http://lists.apple.com/archives/streaming-server-users/2005/Jul/msg00097.html
2.http://wiki.videolan.org/Documentation:Streaming_HowTo/Streaming_a_live_video_feed_to_Darwin_Streaming_Server_for_Mobile_Phones
firewire as a v4l device
1.https://bbs.archlinux.org/viewtopic.php?id=43204
big picture
1.vloopback is a converter from a firewire-type to a v4l-type device; install it
2.install firewire device
3.vloopback detects it
4.v4l detects it
5.firewire is used like a v4l node by any v4l application
1.vlc is an interface
2.v4l is the engine used by vlc
linux v4l big picture
backend
1.the v4l server is an in/out switch, a dashboard
2.every capture device is a node on v4l, /dev/video0, /dev/dsp2
3.video is that node's video-details struct, referred to by a symbolic name (details of type, channel table, freq)
4.inputs ---- links a video details to capture device
frontend
1.input device --- stream encoding
2.output device ---- encoding audio/video
firewire streaming from dvcam
1.http://rdfintrospector2.blogspot.com/2009/04/firewire-vlc-streaming.html
ubuntu firewire panasonic pvgs29
1.modprobe ohci1394
modprobe raw1394
chmod 777 /dev/raw1394
testlibraw
-->should show 1 node with cam not connected/switched off
-->will show 2 nodes with cam connected/on
remove other cards on pci slot
try all firewire ports one by one
2.
chmod 777 /etc/modules; echo "raw1394" >> /etc/modules; echo "dv1394" >> /etc/modules; echo "video1394" >> /etc/modules; chmod 644 /etc/modules;
3.
gedit /lib/udev/rules.d/50-udev-default.rules
in the #firewire section add the following line at the end:
KERNEL=="raw1394", GROUP="video"
4.
usermod -G video -a YourNameGoesHere;
5.
Then Reboot and your camcorder should be recognised by kdenlive or Kino and ready for capture
6.
apt-get install libiec61883-dev
apt-get install libavc1394-dev
aplay -L
apt-get install mplayer
Tuesday, November 16, 2010
using android media framework
1.use java application and opencore
2.render mp4 with h264 and mp3 on sf and af using software accelerators
3.do the same using hw accelerators
4.test it using an incoming mp4 stream over network
5.where is the a and v demuxing happening
6.how many components are there in the pipeline
7.which component redirects the stream to sf and af
8.how are the controls interfaced with the pipeline
model
there is a loop,
it processes command packets first
then the data packets
the data is both audio and video ... one after the other
so a complete cycle is .. process command, process video, process audio (a loop sketch follows)
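a toy c sketch of the loop model above; the queues are stand-in counters, not opencore api:

#include <stdio.h>

static int cmds = 2, vframes = 3, aframes = 3;   /* toy queues */

int main(void)
{
    while (cmds || vframes || aframes) {          /* one complete cycle: */
        while (cmds) { cmds--; puts("process command"); }   /* commands first */
        if (vframes) { vframes--; puts("process video"); }
        if (aframes) { aframes--; puts("process audio"); }
    }
    return 0;
}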
android media framework 1
1.mfw has a graph
2.graph has nodes
3.nodes are sources or sinks; they are encoders, decoders, parsers, modifiers
4.source or sink are software or hardware
5.frame comes from source and goes into sink
6.frames are put into command queue of each node
7.each hw/sw node has upper edge OMX interface
8.a node is a .so
9.init is done using .cfg files
10. .so contains cfg i/f, omx wrapper,
11. every .so contains a player engine; the player driver invokes this engine
12.fw searches ./system/etc/01_Vendor_ti_omx.cfg for a hw codec
if not found, it uses SW codecs from the PVOMX components (picked up by ./system/etc/pvplayer.cfg)
You can disable Hardware acceleration by editing this file: platform/vendor/ti/zoom2/BoardConfig.mk
take a look at the supported OMX roles (tComponentName): platform/hardware/ti/omx/system/src/openmax_il/omx_core/src/OMX_core.c
13.hardware codec === hardware accelerators
14.they are different from display accelerators
android media framework
1.
level 1 -- MIO source/sink
level 0 -- OMX source/sink (decoder)
level -1(sink) -- surfaceflinger,audioflinger
level -1(source) -- raw hw,encoded stream,decoded stream,hw
2.MIO is like switch box
3.playback is determined by the global clock and stream packet timestamps; at any moment the timestamp
has to be in sync with the relative clock value (a sketch follows)
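a tiny c sketch of the sync rule above: act on a frame according to how its timestamp compares with the running clock; the thresholds are invented:

#include <stdint.h>

int frame_action(int64_t pts_ms, int64_t clock_ms)
{
    int64_t delta = pts_ms - clock_ms;
    if (delta > 10)  return 0;   /* early: hold the frame */
    if (delta > -50) return 1;   /* in sync: render now */
    return 2;                    /* too late: drop to catch up */
}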
Monday, November 15, 2010
gstreamer and android
1.get some version of android
2.get the gstreamer port
3.integrate
4.connect to android using adb
5.start gst-launch on adb
6.the video will appear on the android screen on the emulator
Desired checkpoints in android media framework
1.from source store stream into file
2.from file send stream into sink
3.list available sinks
4.list property of source stream
5.list relation between software sink and hardware sink and its properties
basics of audio and video
1.useful apps
mplayer
mencoder
aplay
arecord
amixer
2.stack
mythtv
alsa v4l
oss capture cards(/dev/video)
capture cards(/dev/dsp)
capture cards(/dev/master,/dev/mixer)
3.hardware cards encode a raw stream and give back an encoded format packet (mp4)
4.the hardware has an adc, dsp
5.a mixer is associated with a single audio card
6.the codec software is stored in specific folders in linux; all standard players will look
for it in these folders. this covers both hardware and software codecs.
7.capture cards have analog inputs as wav, digital out as mp3, analog out as ac3, digital in as h264 etc
8.all hardware codec capture cards will definitely expose one interface through which raw packets can be obtained.
9.a capture card takes a raw packet, encodes it and stores it temporarily on the pc; this is taken and later transcoded and played in a player, or given over the network and routed back to the capture card
10./dev/pcmxxx is the mixer for an individual card; /dev/master is the root mixer of alsa, made from the different /dev/pcmxxx
11.multimedia ---- files, containers, data
files ..../abc.mp4
container ..... mp4
data ....... video rtjpeg/mpeg4,audio mp3
12.
xvid/divx
h264
practical devices
Development board ----- Arduino Mega 2560 ------------- price ~50$
uart camera module ----- µCAM529-TTL ------------- price ~4500Rs
uart gprs module --------- SIm300 Module ---------- price ~3000Rs
uart gps module --------- EM-408 GPS module -------- price ~4000Rs
hawk board ----~80$
moxa usb-4uart ---250$
and others
mythtv
1.has 3 software components
mythtv-setup
mythtvbackend
mythtvfrontend
mythtv-setup is run first
input tv tuner card
scan channels
input audio input device -- dsp
then mythfilldatabase
mythtvbackend is run next
in another system mythfrontend is run
in setup recording profile --->check video and audio codec
add a separate video card with S video out and connect to tv
Thursday, November 11, 2010
microcontroller, microprocessor, multi-cpu, dsp boards and breadboards
1.http://opencircuits.com/Demo_board
2.http://electronics.stackexchange.com/questions/5658/searching-for-atmel-arm-mcu-in-a-breadboard-setup-like-arduino-nano-but-obvio
3.http://www.embeddedrelated.com/usenet/embedded/show/115403-1.php
4.http://www.fpga4fun.com/
5.http://www.signalware.com/dsp/index.php
6.http://e2e.ti.com/support/dsp/tms320c5000_power-efficient_dsps/f/109/p/70468/255817.aspx
7.http://www.futurlec.com/SMD_Adapters.shtml
8.http://www.twinind.com/catalog_detail.php?id=163
9.http://sm-breadboard.eu/
10.http://8515.avrfreaks.net/index.php?name=PNphpBB2&file=viewtopic&t=88345&view=next
11.http://thedailyreviewer.com/hardware/view/minipci-breadboard-fpga-111146072
12.http://www.electronics-related.com/usenet/design/show/142861-1.php
13.http://www.edaboard.com/thread19226.html
Sunday, November 7, 2010
solar kit : wireless
1.TI's solar radio combo
http://focus.ti.com/docs/toolsw/folders/print/ez430-rf2500-seh.html
2.a private zigbee based solar
http://chezphil.org/slugbee/
engineering ladder
1.should know the options available to achieve a goal
2.should know important parameters of the options
3.should decide what parameters are needed for the goal
4.make a choice from the options
5.make execution plan
6.validate the plan in intervals
7.bound it by time
(for 4: decide on cost, availability, skill, time etc)
Saturday, November 6, 2010
motherboard standoff screw
1.gives clearance to the board from the surface
Friday, November 5, 2010
bow arrow model of softw development
1.visualize the software as a reverse pyramid
2.the tip of the pyramid is assumed to be a target that arrows originating from different layers
of the pyramid are supposed to hit
3.the layer nearest the target is supposed to hit first
4.at the start of development, a few arrows are released from the bow; after a lag all the other layers start releasing arrows; the arrows travel down the layers and wait for their window to hit the target
5.hitting the target symbolizes module completion and delivery
6.this happens wave after wave, in an iterative fashion, till the entire delivery is completed
7.planning matters if arrows in the near layer are delayed and an upper layer overtakes them; window ordering is also important
method
1.choose a tech
2.frame what its supposed to do
3.create a thin air model of the tech
4.collect facts, correlate facts with the fundamental rules of the tech
5.create more complex ones
india civic and business todo
1.apartment docs
2.scope of documents and its usefulness max
3.recurring charges on apartments
4.ownership patterns
1.employees,managing body,link with government,link to law,start time assets,asset accumulation,balance sheet,shares,capital raising,funding,take over,ipo
day to day expenses
revenue
taxes
1.http://www.propertysamachar.com/2010/06/18/documents-to-see-before-buying-an-apartment-in-bangalore/
2.
payment
loan + others
calculate the emi you can afford
adjust the loan availed such that the bank gives you a loan whose emi matches
the rest you put from hand
with the developer
may have to pay the lawyer for checking
may have to pay the bank for processing
first stage pay the advance
next stage transfer the cheque from bank to developer
first stage receipt is got from developer
second stage, the photocopy of the sale deed is with you, the original is taken by the bank, and the cheque
goes to the developer (three-way atomic??)
always check the web for review about the builder
parking
khata
oc
hard water deposits
1.http://www.mouthshut.com/review/Choosing-a-Dishwasher-rurtllqtlm-1.html
2.http://answers.yahoo.com/question/index?qid=20070922110349AA34JYz
3.deposits on tiles,plastics,fixtures,glass,stainless steel
lime,limescale(calcium,magnesium),mineral deposits
4.http://www.ehow.com/how_4867824_seal-bathroom-tile.html
5.http://www.slideshare.net/sambrown12/how-to-keep-your-kitchen-and-bathroom-stainfree
6.http://www.realsimple.com/home-organizing/cleaning/bathroom/cleaning-bathroom-accessories-00000000001159/index.html
7.http://www.realsimple.com/home-organizing/cleaning/conquer-household-odors-10000001067776/page10.html
8.replace nails with adhesives
http://trade.indiamart.com/details.mp?offer=1195664991
http://solutions.3mindia.co.in/wps/portal/3M/en_IN/3MAdhesivesTapes/Home/Product_Information/Where_to_Buy/
9.3m adhesive to hang whiteboard
http://hubpages.com/hub/Removable-Wall-Hook-Uses-Creative-Ways-To-Use-Removable-Wall-Hooks
http://www.command.com/wps/portal/3M/en_US/NACommand/Command/Products/Product-Catalog/
http://www.reallygoodstuff.com/product/20+magnet+hook.do?sortby=ourPicks&from=Search
voltage converters for countries
1.a black and decker driver of US voltage 110v has to be run on the indian voltage of 220v
http://www.starkelectronic.com/st500.htm
ST-750 Max, power 750 watts, ~120V * 5.2A (about 624 W drawn, within the 750 W rating)
2.India brands : http://www.maxineindia.com/tran_stepdown.htm
3.india uses 50hz cycle , and US uses 60 hz
4.Maxine 1500w ~4k
5.some options
http://www.hifivision.com/what-should-i-buy/8264-suggest-220-110v-step-down-converter.html
fundamental possibilities
1.maybe in optic digital science
1.color A and B represents 1 and 0
2.color C and D represents control(start frame and stop frame)
2.Maybe relative motions can generate electricity,magnets placed under arms and legs
3.passing small amounts of electricity in a magnetic field and circular motion can somehow be
used to generate an airlift for micro devices .... if the power for this can be reduced to the
level of solar power then , it can create revolutionary surveillance devices
4.micro oscillators that can tune to existing cellular frequencies and generate feedback surplus potential to run chips can begin the era of packet power technology. it will add a new dimension to cellular communication.
maybe interstellar radio energy can be used to generate an infinite power source
5.plants grown at home should be able to generate tiny levels of power, which should be able to
generate the potential for driving chips. is there any plant that protects itself by giving a shock to
its predators?
======
1.approach for research at home :
1.small economical measurement and research kits : tools for measurement eg: magnetic parameters
electrical,mechanical,radio etc
2.internet : to read,propose,publish,feedback
3.infrastructure : power sources,pc
Thursday, November 4, 2010
future : software defined radios
1.The lone runner in the space ---- lyrtech
2.
http://focus.ti.com/docs/solution/folders/print/357.html?DCMP=dsp_software&HQS=Other+IL+sdr#Product%20Bulletin%20and%20White%20Papers
http://www.lyrtech.com/Products/SFF_SDR_evaluation_module.php
http://www.lyrtech.com/Products/SFF_SDR_development_platforms.php
http://video.google.com/videoplay?docid=5846031950696380489# -- gsm demo
3.This contains
1.basic module ---- fpga(code) + dsp
2.Advanced DAC ---- fpga(code) + DAC
3.RF module
4.Basic module is priced at 2500$
with others it is at 10000$
5.current support in Mhz range
they will come up with support for:
WIMAX --- 0.9 to 2 ghz
LTE --- 3 to 4 ghz (both read as placeholders)
6.Currently there is support only for systemC to write fpga code. No luck for layman c coders, the syntax is different ...... things will change dramatically if K&R C could be used to write fpga code
the price of fpga will come down from lakhs to some thousands .... and probably become affordable to many
Is it so difficult to create a bridge between C and systemC? I would like to implement my second-brain hw using it
7.a collection of projects
http://f4dan.free.fr/sdr_eng.html
upto date gnu radio hardware -- http://www.docstoc.com/docs/25417716/GNU-RADIO-INTRODUCTION
8.price of gnu radio,daughter board -http://old.nabble.com/USRP2-Price-td19399125.html
9.fundamental sdr research --http://docs.google.com/viewer?a=v&q=cache:UBrs-UOGzRIJ:www.eecis.udel.edu/~manicka/Research/NaveenManicka_Thesis.pdf+gnuradio,phy&hl=en&gl=in&pid=bl&srcid=ADGEEShth_IWy-VkDFjpy5RotB2anNbB8UOqDoCMpCEDvkQM0cmCtIb0wjYCpbDHL39BZE6i1BUJB8z8-EghqAew6Rj5giA2l-kz49RDr0_0UbET3_BIxKZf-LhlHgm28RkJ46vJuP62&sig=AHIEtbT4_8tuPT8_ofbNcqQMHawrPSK-9g