News
From self-folding robots, to robotic endoscopes, to better methods for computer vision and object detection, researchers at the University of California San Diego have a wide range of papers and ...
Now, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a computer vision system that can identify objects it has never seen before. Not to ...
Google DeepMind recently announced Robotics Transformer 2 (RT-2), a vision-language-action (VLA) AI model for controlling robots. RT-2 uses a fine-tuned LLM to output motion control commands. It can ...
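RT-2's trick of emitting motion commands as text is worth unpacking: per the RT-2 paper, each action is a string of eight integer tokens (an episode-terminate flag, three end-effector position deltas, three rotation deltas, and a gripper command), each discretized into 256 bins. Below is a minimal sketch of decoding such a token string back into a continuous command; the bin ranges and function names are illustrative assumptions, not DeepMind's implementation.

```python
# Minimal sketch of RT-2-style action decoding. The 8-token layout follows the
# RT-2 paper; the bin ranges below are illustrative assumptions.

NUM_BINS = 256

def unbin(token: int, low: float, high: float) -> float:
    """Map a discrete bin index back to a continuous value in [low, high]."""
    return low + (token / (NUM_BINS - 1)) * (high - low)

def decode_action(token_string: str) -> dict:
    """Turn an LLM output like '0 128 91 241 5 101 127 200' into a command."""
    tokens = [int(x) for x in token_string.split()]
    assert len(tokens) == 8, "expected terminate + 6 pose deltas + gripper"
    return {
        "terminate": bool(tokens[0]),
        "delta_position_m": [unbin(t, -0.1, 0.1) for t in tokens[1:4]],
        "delta_rotation_rad": [unbin(t, -0.5, 0.5) for t in tokens[4:7]],
        "gripper_closure": unbin(tokens[7], 0.0, 1.0),
    }

if __name__ == "__main__":
    print(decode_action("0 128 91 241 5 101 127 200"))
```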
Tech Xplore: Teaching a robot its limits to complete open-ended tasks safely
To a robot, though, the motto means learning constraints, the limitations of a specific task within the machine's environment, so it can do chores safely and correctly. For instance, ...
In 2022, InfoQ covered both SayCan, which uses an LLM to output a high-level action plan, and Code-as-Policies, which uses an LLM to output low-level robot control code. PaLM-E is based on a pre-trained ...
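SayCan's core idea can be stated concretely: it picks the skill that maximizes the product of the LLM's likelihood that the skill is a useful next step and a learned affordance (value) estimate that the skill can succeed in the current state. Here is a minimal sketch of that scoring rule, with toy hard-coded numbers standing in for the real LLM and value function; the skill names and scores are hypothetical.

```python
import math

# Hypothetical stand-ins for the real models: the LLM's log-likelihood of each
# skill as the next step for "wipe the counter", and a learned affordance
# (success probability) for each skill in the current state.
LLM_LOG_PROBS = {
    "pick up the sponge": -0.3,
    "go to the sink": -1.2,
    "pick up the apple": -2.5,
}
AFFORDANCES = {
    "pick up the sponge": 0.1,  # sponge is out of reach right now
    "go to the sink": 0.9,
    "pick up the apple": 0.8,
}

def select_skill(skills: list[str]) -> str:
    # SayCan-style scoring: p_LLM(skill | instruction) * p_success(skill | state).
    def score(skill: str) -> float:
        return math.exp(LLM_LOG_PROBS[skill]) * AFFORDANCES[skill]
    return max(skills, key=score)

if __name__ == "__main__":
    print(select_skill(list(LLM_LOG_PROBS)))  # affordance vetoes the sponge
```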
Artificial intelligence startup eYs3D Microelectronics Co. is bringing computer vision-based image processing capabilities to all manner of autonomous robotic applications and smart city devices with ...
Purdue University researchers in the School of Electrical and Computer Engineering are developing integrative language and vision software that may enable an autonomous robot to interact with people ...
Affordance-based manipulation is a way to reframe a manipulation task as a computer vision task. Rather than mapping pixels to object labels, it associates pixels with the value of actions.
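In concrete terms, an affordance-based policy predicts a dense value map over the image, one score per pixel for an action such as grasping, and then acts at the highest-value pixel. The sketch below illustrates that argmax-over-values structure; `predict_affordances` is a hypothetical stub returning random values, standing in for a trained fully convolutional network.

```python
# Minimal sketch of affordance-based grasping: score every pixel by the value
# of grasping there, then act at the argmax. No object detection involved.
import numpy as np

def predict_affordances(image: np.ndarray) -> np.ndarray:
    """Hypothetical stub: per-pixel estimated value of executing a grasp.
    A real system would run a fully convolutional network trained on
    grasp successes and failures."""
    rng = np.random.default_rng(0)
    return rng.random(image.shape[:2])

def best_grasp_pixel(image: np.ndarray) -> tuple[int, int]:
    values = predict_affordances(image)
    # The policy is just an argmax over the value map: "which pixel is the
    # most valuable place to act", rather than "which pixels are the mug".
    row, col = np.unravel_index(values.argmax(), values.shape)
    return int(row), int(col)

if __name__ == "__main__":
    image = np.zeros((480, 640, 3), dtype=np.uint8)
    print(best_grasp_pixel(image))
```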
The vision-free version of MIT’s Cheetah 3 robot can jump onto a 30-inch-high tabletop. Boston Dynamics’ scary-smart robots make use of sophisticated computer vision, but MIT ...