Robots are changing the world, with a presence in just about every industry, from agriculture to medicine and transport to retail. Due to popular demand, RE•WORK is hosting the Deep Learning for Robotics Summit in Boston this May 23 - 24, bringing together influential researchers, disruptive startups and leading robotics companies to explore how applied deep learning can improve robotic skills. Advancements in reinforcement learning, computer vision and progressive neural networks are shaping robot development, and these advances, combined with innovations from robot manufacturers, will build the next generation of robots impacting our world.
In advance of the event, we’re taking a look at what’s new in the space:
1. Researchers Are Now Giving The Sense Of Touch To Robots Through Deep Learning
Ever since its inception, artificial intelligence has been used to give machines better vision. Thanks to AI and deep learning, machines are now developed enough to understand their surroundings through sight. With robots already equipped with vision, researchers and AI enthusiasts are now focusing on other senses, such as touch. Jason Toy, an AI enthusiast, technologist and the founder of Somatic (a company that specializes in deep learning image and natural language processing models optimised for mobile applications), has set a path to a new realm of AI advancement with his latest project.
2. Robots are becoming classroom tutors. But will they make the grade?
Pondering a tablet screen displaying a town scene, a pre-K student tilts her head to the side and taps her lip thoughtfully. “What are we trying to find?” asks the plush, red and blue robot called Tega that’s perched on the desk beside the girl. The bot resembles a teddy bear–sized Furby. “We are trying to find lavender-colored stuff,” the girl explains. Lavender is a new vocabulary word. “OK!” Tega chirps.
3. AI Robot is Learning to Draw Portraits
Artificial intelligence and robotics are turning many industries upside down, including the fine arts. An AI machine has already painted a canvas borrowing from historical artwork, which sold in October 2018 for $432,500. A new robot called Ai-Da, named for Ada Lovelace, the British mathematician credited with writing the first computer algorithm, will do something much harder. The device will be able to draw, according to Reuters.
4. NHS should invest in digital, AI, and robotics training
The Topol Review of the training needs of the future NHS workforce has published its recommendations – including an increase in the number of clinicians trained to use digital, AI and robotics technologies.
Commissioned by the former health secretary Jeremy Hunt, the Topol Review is led by the California-based scientist Dr Eric Topol, an expert in cardiology, genetics and digital medicine. Its overall remit is to identify the training needs of NHS staff in technologies such as AI and digital medicine. According to the report, 90% of NHS staff will require some element of digital skills, and the entire report is guided by the principle that patients should be partners who are well-informed about health technologies.
5. Scientists Gave This Robot Arm a ‘Self Image’ and Watched it Learn
In The Matrix, Morpheus tells Neo that their digital appearance is based on their “residual self-image.” That is, the characters look how they imagine themselves to look, based on their own mental models of themselves.
In the real world, scientists have been trying to teach robots that trick as well. That’s because, unlike the warring machines of The Matrix, a real-life robot with an accurate self-image might benefit humanity. It would allow for faster programming and more accurate self-planning, and help a device self-diagnose when something has gone wrong. It could even help a robot adapt to any damage it sustains.