Author: Exaltus.ai

Our roads might one day be safer thanks to a completely new type of system that overcomes some of lidar’s limitations. Lidar, which uses pulsed lasers to map objects and scenes, helps autonomous robots, vehicles and drones to navigate their environment. The new system represents the first time that the capabilities of conventional beam-scanning lidar systems have been combined with those of a newer 3D approach known as flash lidar. In Optica, Optica Publishing Group’s journal for high-impact research, investigators led by Susumu Noda from Kyoto University in Japan describe their new nonmechanical 3D lidar system, which fits in…
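
A back-of-the-envelope illustration, not drawn from the paper: pulsed-laser ranging works by timing an echo. A pulse travels out, reflects off a target and returns, and half the round-trip time multiplied by the speed of light gives the distance; repeating that measurement across many beam directions builds up a 3D map. The Python sketch below uses invented echo times and says nothing about the specifics of the Kyoto design.

# Illustration only: the time-of-flight principle behind pulsed-laser ranging.
# Echo times are invented for the example; nothing here models the
# nonmechanical system described in the article.
C = 299_792_458.0  # speed of light, m/s

def range_from_echo(round_trip_seconds):
    """Distance to a target from the round-trip time of one laser pulse."""
    return C * round_trip_seconds / 2.0

# A beam-scanning lidar takes one such measurement per direction (azimuth,
# elevation in degrees); a flash lidar captures a whole scene of them at once.
echo_times = {
    (0.0, 0.0): 66.7e-9,   # straight ahead: roughly 10 m
    (5.0, 0.0): 133.4e-9,  # 5 degrees to the right: roughly 20 m
}
point_cloud = {direction: range_from_echo(t) for direction, t in echo_times.items()}
print(point_cloud)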

Players and coaches for the Philadelphia Eagles and Kansas City Chiefs will spend hours and hours in film rooms this week in preparation for the Super Bowl. They’ll study positions, plays and formations, trying to pinpoint what opponent tendencies they can exploit while looking to their own film to shore up weaknesses. New artificial intelligence technology being developed by engineers at Brigham Young University could significantly cut down on the time and cost that go into film study for Super Bowl-bound teams (and all NFL and college football teams), while also enhancing game strategy by harnessing the power of…

Engineers at the University of Waterloo have developed artificial intelligence (AI) technology to predict if women with breast cancer would benefit from chemotherapy prior to surgery. The new AI algorithm, part of the open-source Cancer-Net initiative led by Dr. Alexander Wong, could help unsuitable candidates avoid the serious side effects of chemotherapy and pave the way for better surgical outcomes for those who are suitable. “Determining the right treatment for a given breast cancer patient is very difficult right now, and it is crucial to avoid unnecessary side effects from using treatments that are unlikely to have real benefit…

Carnegie Mellon University’s Robotics Institute has a new artist-in-residence. FRIDA, a robotic arm with a paintbrush taped to it, uses artificial intelligence to collaborate with humans on works of art. Ask FRIDA to paint a picture, and it gets to work putting brush to canvas. “There’s this one painting of a frog ballerina that I think turned out really nicely,” said Peter Schaldenbrand, a School of Computer Science Ph.D. student in the Robotics Institute working with FRIDA and exploring AI and creativity. “It is really silly and fun, and I think the surprise of what FRIDA generated based on…

Large language models like OpenAI’s GPT-3 are massive neural networks that can generate human-like text, from poetry to programming code. Trained using troves of internet data, these machine-learning models take a small bit of input text and then predict the text that is likely to come next. But that’s not all these models can do. Researchers are exploring a curious phenomenon known as in-context learning, in which a large language model learns to accomplish a task after seeing only a few examples — despite the fact that it wasn’t trained for that task. For instance, someone could feed the…
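
An illustrative sketch only, not taken from the article: in-context learning amounts to placing a handful of worked examples directly in the prompt, after which the model infers the pattern and completes the final, unanswered case without any weight updates. The sentiment-classification examples below are invented, and sending the prompt to a model is left as a comment because no particular API is assumed.

# Build a few-shot prompt: the "learning" happens entirely inside the prompt text.
examples = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I want those two hours of my life back.", "negative"),
    ("A heartfelt story with terrific performances.", "positive"),
]
query = "The plot dragged and the ending made no sense."

lines = ["Review: {}\nSentiment: {}".format(text, label) for text, label in examples]
lines.append("Review: {}\nSentiment:".format(query))  # left blank for the model to complete
prompt = "\n\n".join(lines)

print(prompt)
# In practice, `prompt` would be sent to a large language model such as GPT-3,
# which typically completes the last line with "negative", even though it was
# never fine-tuned for sentiment classification.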

Can a pigeon match wits with artificial intelligence? At a very basic level, yes. In a new study, psychologists at the University of Iowa examined the workings of the pigeon brain and how the “brute force” of the bird’s learning shares similarities with artificial intelligence. The researchers gave the pigeons complex categorization tests that high-level thinking, such as using logic or reasoning, would not aid in solving. Instead, the pigeons, by virtue of exhaustive trial and error, eventually were able to memorize enough scenarios in the test to reach nearly 70% accuracy. The researchers equate the pigeons’ repetitive, trial-and-error…

Humans naturally perform numerous complex tasks. These include sitting down, picking something up from a table, and pushing a cart. These activities involve various movements and require multiple contacts, which makes it difficult to program robots to perform them. Recently, Professor Eiichi Yoshida of the Tokyo University of Science has put forward the idea of an interactive cyber-physical human (iCPH) platform to tackle this problem. It can help understand and generate human-like systems with contact-rich whole-body motions. His work was published in Frontiers in Robotics and AI. Prof. Yoshida briefly describes the fundamentals of the platform. “As the name…
