Pavan Ramdya (left), Adam Gosztolai (center) and Semih Günel (right)
© EPFL

Scientists have developed a deep learning-based method called LiftPose3D that can reconstruct 3D animal poses using only 2D poses from a single camera. The method will have an impact in neuroscience and bio-inspired robotics.

"When people perform experiments in neuroscience, they have to make precise measurements of behavior," says Professor Pavan Ramdya of EPFL's School of Life Sciences, who led the study. His group has now published a paper presenting new software that simplifies one of neuroscience's most crucial yet laborious tasks: capturing 3D models of freely moving animals. This tool allows researchers to study the brain mechanisms that control body movements, and the goal of reverse-engineering biological behavior has far-reaching applications in robotics and AI.

"In the past, we used a deep neural network to perform this kind of 'pose estimation' in animals," says Ramdya, referring to the process by which a computer predicts the positions of body parts in camera images.
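To make the idea of "lifting" concrete, here is a minimal, purely illustrative sketch of the general approach the article describes: a small neural network that maps flattened 2D keypoint coordinates to 3D coordinates. This is not the LiftPose3D implementation; the keypoint count, network size, and (untrained, random) weights are all assumptions standing in for parameters that would be learned from paired 2D/3D training data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed values for illustration only; not from the LiftPose3D paper.
N_KEYPOINTS = 38           # number of tracked body parts
IN_DIM = N_KEYPOINTS * 2   # flattened (x, y) input coordinates
HID = 64                   # hidden-layer width
OUT_DIM = N_KEYPOINTS * 3  # flattened (x, y, z) output coordinates

# Random weights stand in for parameters learned from data.
W1 = rng.standard_normal((IN_DIM, HID)) * 0.1
b1 = np.zeros(HID)
W2 = rng.standard_normal((HID, OUT_DIM)) * 0.1
b2 = np.zeros(OUT_DIM)

def lift_2d_to_3d(pose_2d: np.ndarray) -> np.ndarray:
    """Map a (N_KEYPOINTS, 2) pose to a (N_KEYPOINTS, 3) pose."""
    x = pose_2d.reshape(-1)           # flatten to one input vector
    h = np.maximum(x @ W1 + b1, 0.0)  # hidden layer with ReLU
    y = h @ W2 + b2                   # linear output layer
    return y.reshape(N_KEYPOINTS, 3)

pose_2d = rng.standard_normal((N_KEYPOINTS, 2))  # dummy 2D detections
pose_3d = lift_2d_to_3d(pose_2d)
print(pose_3d.shape)  # one 3D coordinate per tracked keypoint
```

In practice such a network would be trained by minimizing the error between its predictions and ground-truth 3D poses recorded with a multi-camera rig, after which only a single camera is needed at inference time.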