Posted by Navraj Nandra on March 26th, 2013
Embedded Vision is a hot application these days. For those of you who have kids, you may already have experienced a lot of it in the gesture recognition area with the Kinect box. Samsung’s Galaxy S4 sports a camera that detects your finger pointing in the air to steer the user interface. And sooner than you expect, your car will automatically brake if you don’t notice the pedestrian crossing the street. What do all of these applications have in common? Each of them has a camera sensor that is followed by sophisticated, real-time image processing that extracts the information relevant to the application.
In many cases (pedestrian detection is a good example) it is sufficient to first detect the edges of an object. I had the chance to experience my own edges recently as I walked up to an embedded vision processor demonstration on the 2013 SNUG Design Community Expo show floor this week. My body contour showed up as a spider web of white lines on the monitor in real time as I moved in front of the camera. What I was actually watching was a specialized processor designed with a nifty tool, Synopsys Processor Designer. The embedded vision processor had been implemented on a HAPS® prototyping system that also carried the daughter card with the HDMI outputs driving the screen and the AVI input from the camera. I was told that my image could be slightly improved by fine-tuning the algorithm parameters, but from an edge detection perspective I was looking just fine…
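The blog doesn't say which edge detection algorithm the demo ran, but a classic choice for this kind of real-time contour extraction is a Sobel filter: two small convolution kernels estimate the horizontal and vertical brightness gradients at each pixel, and a large combined magnitude marks an edge. Here is a minimal pure-Python sketch (the function name, the toy 5×5 image, and the `|gx| + |gy|` magnitude approximation are my own illustrative choices, not details from the demo; a hardware vision processor would pipeline this computation per pixel per clock):

```python
def sobel_magnitude(img):
    """Return an approximate gradient magnitude for each interior pixel
    of a grayscale image given as a list of rows of intensity values."""
    h, w = len(img), len(img[0])
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Convolve the 3x3 neighborhood with both kernels.
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            # Cheap magnitude approximation, avoiding a square root --
            # exactly the kind of shortcut embedded hardware favors.
            out[y][x] = abs(gx) + abs(gy)
    return out

# A 5x5 test image with a sharp vertical edge between columns 2 and 3.
image = [[0, 0, 0, 255, 255] for _ in range(5)]
edges = sobel_magnitude(image)
```

Pixels sitting on the brightness step get a large magnitude, while pixels in the flat dark or bright regions get zero; thresholding the result yields exactly the kind of white contour lines the demo displayed.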