Robots operating in Amazon warehouses must work in an ever-changing environment in close proximity to people, pallets, and other obstacles. Ben Kadlec, perception lead for Amazon Robotics AI, is leading the development of the AI for the new robots. His team has deployed the robots for preliminary testing as autonomous transports for non-conveyable items.

To succeed, the robots need to map their environment in real time, understand what is a stationary object - and what is not - and use that information to make on-the-fly decisions about where to go and how to avoid collisions while safely delivering oversized items to their intended destinations.

"Navigating through those dynamic spaces is one aspect of the challenge," Kadlec said. "The other one is working in close proximity with humans. That has to do with first recognizing that this thing in front of you is a human and it might move; you might need to keep a further distance from it to be safe; you might need to predict the direction the human is going."

We humans learn about the objects in our environment, and how to safely navigate around them, through curiosity and trial and error, along with the guidance of family, friends, and teachers. Kadlec and his team use machine learning. The process begins with semantic understanding, or scene comprehension, based on data collected with the robot's cameras and LIDAR.

"When the robot takes a picture of the world, it gets pixel values and depth measurements," explained Lionel Gueguen, an Amazon Robotics AI machine learning applied scientist. "So, it knows that at that distance there are points in space - an obstacle of some sort. But that is the only knowledge the robot has without semantic understanding."

Semantic understanding, he continued, is about teaching the robot to define that point in space - to determine whether it belongs to a person, a pod, or a pillar. By layering semantics on top of the sensor data, the robot's AI can differentiate between stationary and moving obstacles and behave differently around people, pallets, and pillars in the warehouse.
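To make those "points in space" concrete: an RGB-D frame pairs every pixel with a depth reading, and the standard pinhole-camera equations turn those pairs into a 3D point cloud. The sketch below is a minimal illustration of that step only; the intrinsics (fx, fy, cx, cy) and the random depth frame are made-up stand-ins, not details of Amazon's sensor stack.

```python
import numpy as np

def depth_to_points(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Back-project a depth image (in meters) into an N x 3 point cloud
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # keep only pixels with valid depth

# Illustrative use: a 480x640 depth frame with made-up camera intrinsics.
depth = np.random.uniform(0.5, 10.0, size=(480, 640))
cloud = depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)  # (307200, 3): every pixel is now a point in space
```

Without semantics, that is all the robot has: a cloud of anonymous obstacle points at known distances.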
"The navigation system does what we call semantically aware planning and navigation," said Siddhartha Srinivasa, director of Amazon Robotics AI. "The intuition is very simple: the way a robot moves around a trash can is probably going to be different from the way it navigates around a person or a precious asset. The only way the robot can know that is if it's able to identify, 'Oh, that's the trash can,' or 'That's the person.' Or if it's a cable lying across the floor, or a forklift, or another robot. And that's what our AI is able to do."

To teach the robots semantics, scientists collected thousands of images taken by the robots as they navigated. Teams then trace the shape of each object in each image and label it. Data scientists use this labeled data to train a machine learning model that segments and labels each object in the cameras' field of view, a process known as semantic segmentation.
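Amazon's in-house model is not public, but the inference step it performs follows the standard semantic segmentation pattern: feed in a camera frame, get back a class label for every pixel. Here is a minimal sketch using an off-the-shelf pretrained network (torchvision's DeepLabV3) purely as a stand-in, with a random tensor standing in for a real camera image.

```python
import torch
from torchvision.models.segmentation import (
    deeplabv3_resnet50, DeepLabV3_ResNet50_Weights,
)

# Load a pretrained segmentation network. This is a stand-in: Amazon's
# production model, training data, and label set are not public.
weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()

# image: a 3 x H x W RGB tensor, standing in for one camera frame.
image = torch.rand(3, 480, 640)
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))["out"]

# Per-pixel class IDs: every pixel in the frame is now labeled.
labels = logits.argmax(dim=1).squeeze(0)  # H x W tensor of class IDs
print(weights.meta["categories"][int(labels[240, 320])])
```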
When these labels are layered on top of the three-dimensional visual representation, the robot can classify each point in space as stable or mobile and use that information to calculate the safest path to its destination. Layered on top of the semantic understanding are predictive models that teach the robot how to treat each object it detects: when it detects a pillar, for example, it knows that pillars are static and will always be there. The team is also working on a model to predict the paths of the people the robot encounters, so the robot can adjust course accordingly, as sketched below.
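Here is a minimal sketch of how semantic labels and motion prediction can feed a planner. Each class gets a clearance cost (the values below are made up), dynamic obstacles also add cost along a constant-velocity forecast of their next few positions - a deliberately crude stand-in for the team's learned predictive models - and a shortest-path search then picks the route that accumulates the least cost.

```python
import heapq
import numpy as np

# Illustrative per-class clearance costs: giving a person a wide berth
# matters far more than hugging a pillar. Values are made up.
CLEARANCE_COST = {"pillar": 3.0, "trash_can": 5.0, "person": 50.0}

def build_cost_map(shape, obstacles, horizon=5):
    """Grid cost map. A static obstacle adds cost at its cell; a dynamic
    one (nonzero velocity) also adds cost along a constant-velocity
    prediction of where it will be over the next `horizon` steps."""
    cost = np.ones(shape)
    for label, (r, c), vel in obstacles:
        steps = range(horizon + 1) if vel != (0, 0) else [0]
        for t in steps:
            rr, cc = r + vel[0] * t, c + vel[1] * t
            if 0 <= rr < shape[0] and 0 <= cc < shape[1]:
                cost[rr, cc] += CLEARANCE_COST[label]
    return cost

def cheapest_path_cost(cost, start, goal):
    """Dijkstra over the cost grid: the 'safest' path is the one that
    accumulates the least semantic clearance cost."""
    dist = {start: cost[start]}
    pq = [(cost[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if 0 <= nxt[0] < cost.shape[0] and 0 <= nxt[1] < cost.shape[1]:
                nd = d + cost[nxt]
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt] = nd
                    heapq.heappush(pq, (nd, nxt))
    return float("inf")

# A pillar stays put; a person at (5, 2) walking right gets a predicted
# path, so the cells ahead of them become expensive to traverse.
obstacles = [("pillar", (3, 3), (0, 0)), ("person", (5, 2), (0, 1))]
grid = build_cost_map((10, 10), obstacles)
print(cheapest_path_cost(grid, start=(0, 0), goal=(9, 9)))
```

The design point the sketch illustrates: the planner never asks "is something there?" but "what is there, and where will it be?" - which is exactly why routes bend differently around a pillar than around a person.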
"Our work is improving the representation of static obstacles in the present, as well as starting to model the near future of where the dynamic obstacles are going to be," said Gueguen. "And that representation is passed down in such a way that the robot can plan accordingly to, on the one hand, avoid static obstacles and, on the other hand, avoid dynamic obstacles."

Kadlec and his team have deployed a few dozen robots for preliminary testing and refinement at a few fulfillment centers. There, the robots are moving packages, collecting more data, and delivering insights to the science team on how to improve their real-world performance.

"We can see the future scale that we want to be operating at," Kadlec said. "We see a clear path to being successful." Once Kadlec and his colleagues succeed in the full-scale deployment of autonomous mobile robot fleets that can transport precious, oversized packages, they can apply the learnings to additional robots.