This year, 2019, we're celebrating the 50th anniversary of the Apollo 11 Moon landing. In 1969, this was a monumental achievement, especially when you consider that the computers aboard the Apollo 11 lander had roughly 1,300 times less processing power than an iPhone 5s.
Technology has advanced dramatically since we first landed astronauts on the Moon, and today artificial intelligence is becoming an invaluable tool. How is NASA using AI for space exploration, and how will this technology carry us to the stars?
Right now, astronauts communicate with Earth through the use of radio waves. This works well enough for talking to the International Space Station because it's in low Earth orbit, but it won't be able to support real-time communication as we move further out into the cosmos.
For Artemis astronauts on the Moon, the delay will be barely noticeable at roughly 1.3 seconds each way, but once we get out to Mars and beyond, it'll take upwards of 13 minutes to send or receive a message.
AI could change the way we approach deep-space communication by tapping underutilized portions of the electromagnetic spectrum. Because no other existing communications technology uses these bands, NASA doesn't need to worry about its messages to and from space interfering with anyone else's phone calls or texts.
With enough information, a cognitive data network like this one could even suggest new data pathways when existing ones become clogged, keeping communication flowing and ensuring that mission-critical information never gets lost in outer space.
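The rerouting idea above can be sketched with a shortest-path search that simply skips congested links. This is a minimal illustration, not NASA's actual cognitive-network software; the relay names and latency figures are made up for the example.

```python
import heapq

def best_path(links, congested, src, dst):
    """Find the lowest-latency path from src to dst, avoiding congested links.

    links: dict mapping node -> list of (neighbor, latency_seconds)
    congested: set of (node, neighbor) link tuples to route around
    Returns (total_latency, path) or None if no route exists.
    """
    queue = [(0.0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, latency in links.get(node, []):
            if (node, nbr) in congested or nbr in seen:
                continue
            heapq.heappush(queue, (cost + latency, nbr, path + [nbr]))
    return None

# Hypothetical relay topology: a Mars rover talking to Earth via orbiters.
links = {
    "rover":     [("orbiter_A", 0.02), ("orbiter_B", 0.03)],
    "orbiter_A": [("dsn_earth", 760.0)],
    "orbiter_B": [("dsn_earth", 790.0)],
}

# With the primary relay link congested, the planner falls back to orbiter_B.
print(best_path(links, {("rover", "orbiter_A")}, "rover", "dsn_earth"))
```

A real cognitive network would learn link quality from telemetry rather than being handed a static congestion set, but the core decision it makes looks much like this.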
Once you move beyond the reach of Earth's network of GPS satellites, the technology can't help you find your way anymore. This means that if astronauts get lost in space, they are really, truly lost. Researchers are therefore exploring AI to help Artemis astronauts navigate on the lunar surface.
Instead of relying on global positioning satellites like you would on Earth, astronauts will be able to take a picture of their surroundings and compare it against a database containing millions of images of the Moon's surface. The AI can then match the new picture to the database and tell the astronauts exactly where they are on the Moon.
Built on machine learning, this technology will essentially create a virtual Moon that astronauts can use to navigate. That capability will become essential when the first Artemis astronauts head to the Moon, hopefully in 2024, and later as we develop a permanent presence on our closest celestial neighbor.
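At its core, this kind of localization is a nearest-neighbor search: reduce each image to a feature vector, then find the database entry most similar to the query. The sketch below uses made-up four-dimensional feature vectors and coordinates; a real system would use a learned embedding over millions of lunar images.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def locate(query_features, database):
    """Return the (lat, lon) of the database image most similar to the query."""
    best = max(database, key=lambda entry: cosine_similarity(query_features, entry[0]))
    return best[1]

# Toy database: three "images" reduced to short feature vectors,
# each tagged with the lunar coordinates where it was taken.
database = [
    ([0.9, 0.1, 0.0, 0.2], (0.67, 23.47)),    # near the Apollo 11 site
    ([0.1, 0.8, 0.3, 0.0], (-3.01, -23.42)),  # near the Apollo 12 site
    ([0.0, 0.2, 0.9, 0.4], (26.13, 3.63)),    # near the Apollo 15 site
]

query = [0.85, 0.15, 0.05, 0.25]  # features of the astronaut's new photo
print(locate(query, database))    # -> (0.67, 23.47), the closest match
```

The hard part in practice is making the features robust to lighting, viewing angle and dust, which is exactly where the machine learning comes in.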
Monitoring Spacecraft Health
Modern spacecraft are marvels of aerospace engineering, with more than 1,000 different telemetry variables that the ground crew monitors at any given time. One thing going wrong could result in the failure of the entire spacecraft, but keeping up with all of those variables is challenging for even the most talented ground crew.
Artificial intelligence might never replace ground control, but its ability to process massive amounts of data could prove invaluable when it comes to monitoring spacecraft health both during launch and in flight.
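A simple way to picture this kind of monitoring is per-channel anomaly detection: flag any telemetry reading that strays too far from its historical baseline. The channel names and readings below are invented for illustration; real monitoring systems use far more sophisticated models.

```python
import statistics

def flag_anomalies(history, latest, threshold=3.0):
    """Flag telemetry channels whose latest reading deviates from the
    channel's historical mean by more than `threshold` standard deviations.

    history: dict channel -> list of past readings
    latest:  dict channel -> newest reading
    Returns dict channel -> z-score for every flagged channel.
    """
    flagged = {}
    for channel, readings in history.items():
        mean = statistics.fmean(readings)
        stdev = statistics.pstdev(readings)
        if stdev == 0:
            continue  # a flat channel can't yield a meaningful z-score
        z = abs(latest[channel] - mean) / stdev
        if z > threshold:
            flagged[channel] = z
    return flagged

# Two hypothetical channels; a real spacecraft has 1,000+ of these.
history = {
    "fuel_tank_pressure_kpa": [2000, 2002, 1998, 2001, 1999],
    "battery_temp_c":         [21.0, 21.2, 20.8, 21.1, 20.9],
}
latest = {"fuel_tank_pressure_kpa": 1999, "battery_temp_c": 28.5}

print(flag_anomalies(history, latest))  # only the battery temperature is flagged
```

The advantage over a human crew isn't the statistics themselves but the scale: software can run this check across every channel, every second, without fatigue.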
The Jet Propulsion Laboratory (JPL) has been working on an automated spacecraft health monitoring program since the early 1990s. Dubbed SHARP, for Spacecraft Health Automated Reasoning Prototype, this system could provide the foundation for modern AI spacecraft monitoring.
The Future of AI and Space Exploration
Artificial intelligence as an industry is still relatively new, but the potential applications are already starting to excite space lovers around the globe.
Right now, NASA is working to take humans back to the Moon by 2024, a deadline that is already proving challenging. Implementing AI systems could help shoulder some of the load, making it easier for NASA to meet its 2024 deadline and push beyond it.
At the Deep Learning Summit in London this September, Shreyansh Daftry, Research Scientist at NASA JPL, will be sharing his most current work in deep learning for space exploration. Daftry works at the NASA Jet Propulsion Laboratory (JPL) in Pasadena, California, at the intersection of artificial intelligence and space technology, helping to develop the next generation of robots for Earth, Mars and beyond. He received his M.S. degree in Robotics from the Robotics Institute at Carnegie Mellon University in 2016, and his B.S. degree in Electronics and Communications Engineering in 2013. His research interests span computer vision, machine learning and autonomous robotics, with a focus on real-time computation, safety and adaptability. Register now to guarantee your place at the summit.