Visual Processing for the Bionic Eye
Research and development of visual processing for low vision devices and the bionic eye.
In 2004, around 50,000 Australians were legally blind, a figure projected to rise to 87,000 by 2024 as the population ages (Centre for Eye Research Australia, "Clear Insight: The Economic Impact and Cost of Vision Loss in Australia").
Although there are many useful devices on the market to assist individuals with vision impairment, there is a lack of the kind of sensor-based assistive systems now appearing in cars (e.g., lane departure warning, collision warning, navigation).
This project aims to develop a new generation of assistive devices (stand-alone wearable devices that individuals can use without medical intervention) based on computer vision processing: it will produce prototype devices that aim to demonstrate effective assistance for individuals with vision impairment.
VIBE is also contributing expertise in computer vision processing to Bionic Vision Australia which is funded by the Australian Research Council. The Bionic Vision Australia partnership aims to build the first Australian bionic eye implant whereby individuals may recover some of the lost vision via electrical stimulation of the retina. Vision processing will be one of the key components of a bionic eye as it will enable efficient encoding of high resolution images into a set of stimulation signals on a retinal implant.
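Conceptually, this encoding step maps a high-resolution camera image down to a small set of per-electrode stimulation levels. A minimal sketch in Python, assuming simple block-average downsampling to a 14×7 (= 98) phosphene grid and a handful of stimulation levels (the grid layout and level count are illustrative choices, not BVA parameters):

```python
import numpy as np

def encode_phosphenes(image, grid_shape=(14, 7), levels=8):
    """Downsample a grayscale image to a coarse phosphene grid and
    quantize each cell mean to a small number of stimulation levels.
    grid_shape and levels are illustrative, not implant parameters."""
    h, w = image.shape
    gh, gw = grid_shape
    out = np.zeros(grid_shape)
    for i in range(gh):
        for j in range(gw):
            # average the image region covered by this phosphene
            cell = image[i * h // gh:(i + 1) * h // gh,
                         j * w // gw:(j + 1) * w // gw]
            out[i, j] = cell.mean()
    # quantize to the implant's limited dynamic range
    return np.round(out / 255.0 * (levels - 1)).astype(int)

frame = (np.random.rand(240, 120) * 255).astype(np.uint8)
stim = encode_phosphenes(frame)  # 14 x 7 grid of stimulation levels
```

In a real device the mapping from image regions to electrodes would follow the implant's physical layout rather than a uniform grid, but the downsample-then-quantize structure is the same.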
These devices aim to improve the capabilities of vision-impaired people in:
a) ambulatory navigation (e.g. obstacle avoidance); and
b) interaction with the environment (e.g. face detection and recognition, reading of text/symbols).
Who will benefit?
The Bionic Eye project targets two common retinal diseases:
[Figure: example views under normal vision, retinitis pigmentosa (prevalence of 1 in 5000), and age-related macular degeneration (leading cause of blindness in developed countries)]
The Bionic Vision Australia solution:
- Developing the first Australian retinal implant.
- Perceiving trip hazards is critical for safe mobility and an important capability for prosthetic vision. Our approach creates a visual representation that augments the appearance of obstacles in the environment.
- Trip hazards not marked by abrupt intensity change can be difficult to perceive using standard prosthetic vision scene representations.
- Current segmentation algorithms do not guarantee preservation of low contrast surface boundaries.
- We contribute a system for highlighting trip hazards in prosthetic vision.
[Figure: augmented depth vs. intensity-based representations rendered with 98 phosphenes (matching BVA's first-generation 98-electrode implant); panels show the original scene, the intensity-based rendering, and the augmented-depth rendering]
- The approach has been evaluated in human trials using simulated prosthetic vision, where it significantly improved performance in orientation and mobility tasks.
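One way to read the augmented-depth idea: pixels whose measured depth deviates from the expected ground-plane depth are brightened, so trip hazards that lack an abrupt intensity change still stand out. A minimal sketch under that reading (the function name, threshold, and boost value are illustrative assumptions, not the published algorithm):

```python
import numpy as np

def augment_depth(intensity, depth, ground_depth, threshold=0.05, boost=1.0):
    """Brighten pixels whose depth deviates from the expected ground-plane
    depth. A crude stand-in for an augmented-depth representation;
    threshold and boost are illustrative assumptions."""
    residual = np.abs(depth - ground_depth)
    hazard = residual > threshold          # pixels protruding from the ground
    out = intensity.astype(float) / 255.0  # normalise intensity to [0, 1]
    out[hazard] = np.minimum(out[hazard] + boost, 1.0)
    return out

# Synthetic example: a low box whose intensity matches the background,
# so it is invisible in the intensity image alone.
intensity = np.zeros((10, 10), dtype=np.uint8)
ground = np.full((10, 10), 2.0)   # expected ground depth (metres)
depth = ground.copy()
depth[2:4, 2:4] = 1.8             # box protrudes 20 cm above the ground
augmented = augment_depth(intensity, depth, ground)
```

In the published work the ground surface is estimated from the scene (e.g. via iso-disparity contours) rather than given; here it is supplied directly to keep the sketch self-contained.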
- C. McCarthy and N. Barnes, "Surface extraction from iso-disparity contours," Asian Conference on Computer Vision (ACCV), 2010.
- C. McCarthy, N. Barnes and P. Lieby, "Ground surface segmentation for navigation with a visual prosthesis," IEEE Conference on Engineering in Medicine and Biology (EMBC), 2011.
- P. Lieby, N. Barnes, C. McCarthy, A. Scott, V. Botea and J. Walker, "Mobility experiments using simulated prosthetic vision with 98 phosphenes of limited dynamic range," ARVO, 2012.