
Deep Learning Goes to Boot Camp


The ability to make decisions autonomously is not just what makes robots useful, it’s what makes robots robots. We value robots for their ability to sense what’s going on around them, make decisions based on that information, and then take useful actions without our input. In the past, robotic decision making followed highly structured rules—if you sense this, then do that. In structured environments like factories, this works well enough. But in chaotic, unfamiliar, or poorly defined settings, reliance on rules makes robots notoriously bad at dealing with anything that could not be precisely predicted and planned for in advance.

RoMan, along with many other robots including home vacuums, drones, and autonomous cars, handles the challenges of semistructured environments through artificial neural networks—a computing approach that loosely mimics the structure of neurons in biological brains. About a decade ago, artificial neural networks began to be applied to a wide variety of semistructured data that had previously been very difficult for computers running rules-based programming (generally referred to as symbolic reasoning) to interpret. Rather than recognizing specific data structures, an artificial neural network is able to recognize data patterns, identifying novel data that are similar (but not identical) to data that the network has encountered before. Indeed, part of the appeal of artificial neural networks is that they are trained by example, by letting the network ingest annotated data and learn its own system of pattern recognition. For neural networks with multiple layers of abstraction, this technique is called deep learning.
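
As a concrete illustration of training by example, here is a minimal sketch in Python with NumPy. The data, network size, and learning rate are all invented for a toy problem, but the structure is the point: labeled examples go in, a learned pattern recognizer comes out, and no explicit rule is ever written.

```python
import numpy as np

# Toy annotated data: 2-D points labeled 1 if they fall inside a ring,
# 0 otherwise. No rule for "ring" is ever written down; the network has
# to infer the pattern from the labeled examples alone.
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 2))
r = np.linalg.norm(X, axis=1)
y = ((r > 0.8) & (r < 1.5)).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 16 units; the weights start random and are shaped
# entirely by the training examples.
W1 = rng.normal(0.0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                # hidden-layer activations
    p = sigmoid(h @ W2 + b2).ravel()        # predicted P(inside ring)
    g_out = (p - y)[:, None] / len(X)       # cross-entropy gradient at output
    g_hid = (g_out @ W2.T) * (1.0 - h**2)   # backpropagate through tanh layer
    W2 -= lr * h.T @ g_out;  b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * X.T @ g_hid;  b1 -= lr * g_hid.sum(axis=0)

# A point the network never saw during training. It is classified by
# similarity to learned patterns, not by any explicit if-then rule.
test = np.array([[1.1, 0.2]])
print(sigmoid(np.tanh(test @ W1 + b1) @ W2 + b2))  # high value = "ring"
```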

Even though humans are typically involved in the training process, and even though artificial neural networks were inspired by the neural networks in human brains, the kind of pattern recognition a deep learning system does is fundamentally different from the way humans see the world. It’s often nearly impossible to understand the relationship between the data input into the system and the interpretation of the data that the system outputs. And that difference—the “black box” opacity of deep learning—poses a potential problem for robots like RoMan and for the Army Research Lab.

In chaotic, unfamiliar, or poorly defined settings, reliance on rules makes robots notoriously bad at dealing with anything that could not be precisely predicted and planned for in advance.

This opacity means that robots that rely on deep learning have to be used carefully. A deep-learning system is good at recognizing patterns, but lacks the world understanding that a human typically uses to make decisions, which is why such systems do best when their applications are well defined and narrow in scope. “When you have well-structured inputs and outputs, and you can encapsulate your problem in that kind of relationship, I think deep learning does very well,” says Tom Howard, who directs the University of Rochester’s Robotics and Artificial Intelligence Laboratory and has developed natural-language interaction algorithms for RoMan and other ground robots. “The question when programming an intelligent robot is, at what practical size do those deep-learning building blocks exist?” Howard explains that when you apply deep learning to higher-level problems, the number of possible inputs becomes very large, and solving problems at that scale can be challenging. And the potential consequences of unexpected or unexplainable behavior are much more significant when that behavior is manifested through a 170-kilogram two-armed military robot.

After a couple of minutes, RoMan hasn’t moved—it’s still sitting there, pondering the tree branch, arms poised like a praying mantis. For the last 10 years, the Army Research Lab’s Robotics Collaborative Technology Alliance (RCTA) has been working with roboticists from Carnegie Mellon University, Florida State University, General Dynamics Land Systems, JPL, MIT, QinetiQ North America, University of Central Florida, the University of Pennsylvania, and other top research institutions to develop robot autonomy for use in future ground-combat vehicles. RoMan is one part of that process.

The “go clear a path” task that RoMan is slowly thinking through is difficult for a robot because the task is so abstract. RoMan needs to identify objects that might be blocking the path, reason about the physical properties of those objects, figure out how to grasp them and what kind of manipulation technique might be best to apply (like pushing, pulling, or lifting), and then make it happen. That’s a lot of steps and a lot of unknowns for a robot with a limited understanding of the world.

This limited understanding is where the ARL robots begin to differ from other robots that rely on deep learning, says Ethan Stump, chief scientist of the AI for Maneuver and Mobility program at ARL. “The Army can be called upon to operate basically anywhere in the world. We do not have a mechanism for collecting data in all the different domains in which we might be operating. We may be deployed to some unknown forest on the other side of the world, but we’ll be expected to perform just as well as we would in our own backyard,” he says. Most deep-learning systems function reliably only within the domains and environments in which they’ve been trained. Even if the domain is something like “every drivable road in San Francisco,” the robot will do fine, because that’s a data set that has already been collected. But, Stump says, that’s not an option for the military. If an Army deep-learning system doesn’t perform well, they can’t simply solve the problem by collecting more data.

ARL’s robots also need to have a broad awareness of what they’re doing. “In a standard operations order for a mission, you have goals, constraints, a paragraph on the commander’s intent—basically a narrative of the purpose of the mission—which provides contextual info that humans can interpret and gives them the structure for when they need to make decisions and when they need to improvise,” Stump explains. In other words, RoMan may need to clear a path quickly, or it may need to clear a path quietly, depending on the mission’s broader objectives. That’s a big ask for even the most advanced robot. “I can’t think of a deep-learning approach that can deal with this kind of information,” Stump says.

While I watch, RoMan is reset for a second try at branch removal. ARL’s approach to autonomy is modular, where deep learning is combined with other techniques, and the robot is helping ARL figure out which tasks are appropriate for which techniques. At the moment, RoMan is testing two different ways of identifying objects from 3D sensor data: UPenn’s approach is deep-learning-based, while Carnegie Mellon is using a method called perception through search, which relies on a more traditional database of 3D models. Perception through search works only if you know exactly which objects you’re looking for in advance, but training is much faster since you need only a single model per object. It can also be more accurate when perception of the object is difficult—if the object is partially hidden or upside-down, for example. ARL is testing these strategies to determine which is the most versatile and effective, letting them run simultaneously and compete against each other.
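
As a toy sketch of the search-based idea (this is not ARL's or Carnegie Mellon's actual pipeline; real systems search over full 6-DoF pose with far more efficient data structures), matching a sensed point cloud against a database of known 3D models might look like this:

```python
import numpy as np

def chamfer(observed, model):
    """Mean distance from each observed point to its nearest model point."""
    d = np.linalg.norm(observed[:, None, :] - model[None, :, :], axis=2)
    return d.min(axis=1).mean()

def recognize(observed_cloud, model_db, yaw_steps=36):
    """Search every known 3-D model over candidate rotations and return
    the best-fitting (name, score) pair. This works only because every
    object we might encounter is already in model_db, which is exactly
    the trade-off described above: fast to "train" (one model per
    object), but blind to anything outside the database."""
    best_name, best_score = None, np.inf
    for name, model in model_db.items():
        for yaw in np.linspace(0.0, 2 * np.pi, yaw_steps, endpoint=False):
            c, s = np.cos(yaw), np.sin(yaw)
            R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
            score = chamfer(observed_cloud, model @ R.T)
            if score < best_score:
                best_name, best_score = name, score
    return best_name, best_score
```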

Perception is one of the things that deep learning tends to excel at. “The computer vision community has made crazy progress using deep learning for this stuff,” says Maggie Wigness, a computer scientist at ARL. “We’ve had good success with some of these models that were trained in one environment generalizing to a new environment, and we intend to keep using deep learning for these sorts of tasks, because it’s the state of the art.”

ARL’s modular approach might combine several techniques in ways that leverage their particular strengths. For example, a perception system that uses deep-learning-based vision to classify terrain could work alongside an autonomous driving system based on an approach called inverse reinforcement learning, where the model can rapidly be created or refined by observations from human soldiers. Traditional reinforcement learning optimizes a solution based on an established reward function, and is often applied when you’re not sure what optimal behavior looks like; inverse reinforcement learning turns that around, inferring the reward function from examples of the behavior itself. Not knowing what good behavior looks like is rarely the Army’s problem, since it can generally assume that well-trained humans will be nearby to show a robot the right way to do things. “When we deploy these robots, things can change very quickly,” Wigness says. “So we wanted a technique where we could have a soldier intervene, and with just a few examples from a user in the field, we can update the system if we need a new behavior.” A deep-learning technique would require “a lot more data and time,” she says.
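
A minimal sketch of the feature-matching flavor of inverse reinforcement learning may help; every name here, including `planner` and `features`, is an illustrative placeholder, not ARL's system:

```python
import numpy as np

def feature_counts(path, features):
    """Sum the terrain feature vector (e.g. mud, slope, vegetation)
    over every grid cell a path visits."""
    return sum(features[cell] for cell in path)

def irl_update(w, demo_path, planner, features, lr=0.1):
    """One learning step from a single soldier demonstration: adjust the
    terrain cost weights w so the planner starts preferring paths that
    look like the one the human actually drove."""
    best_path = planner(w)  # plan under the current learned costs
    # Raise the cost of terrain the planner used but the human avoided;
    # lower the cost of terrain the human was happy to drive over.
    grad = feature_counts(best_path, features) - feature_counts(demo_path, features)
    return np.maximum(w + lr * grad, 0.0)  # keep costs non-negative
```

Because each update needs only a single demonstrated path, a handful of corrections in the field can retune the terrain costs, which is the "few examples from a user" property Wigness describes.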

It’s not just data-sparse problems and fast adaptation that deep learning struggles with. There are also questions of robustness, explainability, and safety. “These questions aren’t unique to the military,” says Stump, “but it’s especially important when we’re talking about systems that may incorporate lethality.” To be clear, ARL is not currently working on lethal autonomous weapons systems, but the lab is helping to lay the groundwork for autonomous systems in the U.S. military more broadly, which means considering ways in which such systems may be used in the future.

The requirements of a deep network are to a large extent misaligned with the requirements of an Army mission, and that’s a problem.

Safety is an obvious priority, and yet there isn’t a clear way of making a deep-learning system verifiably safe, according to Stump. “Doing deep learning with safety constraints is a major research effort. It’s hard to add those constraints into the system, because you don’t know where the constraints already in the system came from. So when the mission changes, or the context changes, it’s hard to deal with that. It’s not even a data question; it’s an architecture question.” In ARL’s modular architecture, a module that relies on deep learning for perception, or on inverse reinforcement learning for driving, forms just one part of a broader autonomous system that incorporates the kinds of safety and adaptability the military requires. Other modules can operate at a higher level, using techniques that are more verifiable or explainable and that can step in to protect the overall system from adverse unpredictable behaviors. “If other information comes in and changes what we need to do, there’s a hierarchy there,” Stump says. “It all happens in a rational way.”

Nicholas Roy, who leads the Robust Robotics Group at MIT and describes himself as “somewhat of a rabble-rouser” due to his skepticism of some of the claims made about the power of deep learning, agrees with the ARL roboticists that deep-learning approaches often can’t handle the kinds of challenges that the Army has to be prepared for. “The Army is always entering new environments, and the adversary is always going to be trying to change the environment so that the training process the robots went through simply won’t match what they’re seeing,” Roy says. “So the requirements of a deep network are to a large extent misaligned with the requirements of an Army mission, and that’s a problem.”

Roy, who has worked on abstract reasoning for ground robots as part of the RCTA, emphasizes that deep learning is a useful technology when applied to problems with clear functional relationships, but when you start looking at abstract concepts, it’s not clear whether deep learning is a viable approach. “I’m very interested in finding how neural networks and deep learning could be assembled in a way that supports higher-level reasoning,” Roy says. “I think it comes down to the notion of combining multiple low-level neural networks to express higher-level concepts, and I do not believe that we understand how to do that yet.” Roy gives the example of using two separate neural networks, one to detect objects that are cars and the other to detect objects that are red. It’s harder to combine those two networks into one larger network that detects red cars than it would be to express the same concept in a symbolic reasoning system built on structured rules with logical relationships. “Lots of people are working on this, but I haven’t seen a real success that drives abstract reasoning of this kind.”
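
In code, the symbolic side of Roy's example really is almost trivial; in this hypothetical sketch, `car_net` and `red_net` stand in for the two trained detectors:

```python
def is_red_car(image, car_net, red_net, threshold=0.5):
    # The conjunction is an explicit, inspectable rule that lives outside
    # both networks: easy to write and easy to verify, but the system
    # never learns a unified concept of "red car."
    return car_net(image) > threshold and red_net(image) > threshold
```

The open problem Roy points to runs in the opposite direction: folding that conjunction into the networks themselves, so that "red car" becomes a concept the system can reason with rather than a rule bolted on from outside.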

For the foreseeable future, ARL is making sure that its autonomous systems are safe and robust by keeping humans around for both higher-level reasoning and occasional low-level advice. Humans might not be directly in the loop at all times, but the idea is that humans and robots are more effective when working together as a team. When the most recent phase of the Robotics Collaborative Technology Alliance program began in 2009, Stump says, “we’d already had many years of being in Iraq and Afghanistan, where robots were often used as tools. We’ve been trying to figure out what we can do to transition robots from tools to acting more as teammates within the squad.”

RoMan gets a little bit of help when a human supervisor points out a region of the branch where grasping might be most effective. The robot has no real knowledge of what a tree branch actually is, and this lack of world knowledge (what we think of as common sense) is a fundamental problem with autonomous systems of all kinds. Having a human distill our vast experience into a small amount of guidance can make RoMan’s job much easier. And indeed, this time RoMan manages to successfully grasp the branch and noisily haul it across the room.

Turning a robot into a good teammate can be difficult, because it can be tricky to find the right amount of autonomy. Too little and it would take most or all of the focus of one human to manage one robot, which may be appropriate in special situations like explosive-ordnance disposal but is otherwise not efficient. Too much autonomy and you’d start to have issues with trust, safety, and explainability.

“I think the level that we’re looking for here is for robots to operate on the level of working dogs,” explains Stump. “They understand exactly what we need them to do in limited circumstances, they have a small amount of flexibility and creativity if they are faced with novel circumstances, but we don’t expect them to do creative problem-solving. And if they need help, they fall back on us.”

RoMan is not likely to find itself out in the field on a mission anytime soon, even as part of a team with humans. It’s very much a research platform. But the software being developed for RoMan and other robots at ARL, called Adaptive Planner Parameter Learning (APPL), will likely be used first in autonomous driving, and later in more complex robotic systems that could include mobile manipulators like RoMan. APPL combines different machine-learning techniques (including inverse reinforcement learning and deep learning) arranged hierarchically underneath classical autonomous navigation systems. That allows high-level goals and constraints to be applied on top of lower-level programming. Humans can use teleoperated demonstrations, corrective interventions, and evaluative feedback to help robots adjust to new environments, while the robots can use unsupervised reinforcement learning to adjust their behavior parameters on the fly. The result is an autonomy system that can enjoy many of the benefits of machine learning, while also providing the kind of safety and explainability that the Army needs. With APPL, a learning-based system like RoMan can operate in predictable ways even under uncertainty, falling back on human tuning or human demonstration if it ends up in an environment that’s too different from what it trained on.
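
The hierarchy APPL describes can be sketched structurally. This is an illustration of the idea only, not ARL's code; the confidence threshold and the `param_model` interface are invented for the example:

```python
class AdaptivePlannerSketch:
    """Structural sketch: a classical planner stays in charge, while a
    learned model merely proposes the planner's tuning parameters and
    is overridden whenever it is unsure."""

    def __init__(self, classical_planner, param_model, default_params):
        self.plan = classical_planner         # verifiable navigation stack
        self.param_model = param_model        # learned from demos/feedback
        self.default_params = default_params  # human-tuned safe fallback

    def step(self, observation, goal):
        params, confidence = self.param_model.predict(observation)
        if confidence < 0.7:                  # too unlike the training data:
            params = self.default_params      # fall back to known-safe tuning
        return self.plan(observation, goal, params)

    def correct(self, observation, better_params):
        # A teleoperated demonstration or corrective intervention retunes
        # the parameter model without touching the planner itself.
        self.param_model.update(observation, better_params)
```

The design choice worth noticing is that learning is confined to parameters of a classical system: the planner's behavior stays predictable and inspectable even when the learned component is wrong.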

It’s tempting to look at the rapid progress of commercial and industrial autonomous systems (autonomous cars being just one example) and wonder why the Army seems to be somewhat behind the state of the art. But as Stump finds himself having to explain to Army generals, when it comes to autonomous systems, “there are lots of hard problems, but industry’s hard problems are different from the Army’s hard problems.” The Army doesn’t have the luxury of operating its robots in structured environments with lots of data, which is why ARL has put so much effort into APPL, and into maintaining a place for humans. Going forward, humans are likely to remain a key part of the autonomous framework that ARL is developing. “That’s what we’re trying to build with our robotics systems,” Stump says. “That’s our bumper sticker: ‘From tools to teammates.’ ”

This article appears in the October 2021 print issue as “Deep Learning Goes to Boot Camp.”


Qualcomm unveils the Snapdragon 8 Plus Gen 1, says it will offer 10% faster CPU performance, 10% faster GPU clocks, and have up to 30% better power efficiency (Sean Hollister/The Verge)



Sean Hollister / The Verge:

Bragging rights (and battery life?) for gaming phones  —  Qualcomm’s Snapdragon 8 Gen 1 set the stage for the biggest Android smartphones …


Geoff Keighley teases what’s to come at Summer Game Fest



Summer Game Fest is around the corner, and media entrepreneur Geoff Keighley hints at a month of news starting on June 9.

“First couple of weeks of June are going to be a good time for gamers as always,” Keighley said.

The host of the Game Awards and Summer Game Fest said people might look back at June as an exciting start to the year’s game release news, which has been on the quieter side when it comes to big titles. When asked whether that means people can expect major game announcements, Keighley demurred.

“June is definitely a good time for people to ramp up, get people excited about things coming in the future. So yes, there will be some good announcements. They’ll be good, meaningful updates on games,” Keighley said, adding that, for example, in 2021, the Summer Game Fest showed off gameplay of “Elden Ring,” a previously announced game that still drew a lot of interest. “Will you get everything you want? No. But I think there’ll be some good stuff this year.”

The 2022 gaming news event is mostly digital, though it will feature an in-person component. Imax movie theaters will air the Summer Game Fest in the U.S., Canada and United Kingdom starting on June 9, live from Los Angeles. Viewers can tune into the exact same show on Twitch. (Twitch is owned by Amazon, whose founder, Jeff Bezos, owns The Washington Post.)

While individual game companies will do their own events, as they have in past years, Keighley said he plans to organize things so that they don’t heavily overlap. In another major gaming showcase, Xbox will hold its live-streamed event on June 12.


In light of the Russian invasion of Ukraine, Keighley said he has been in conversations with several Ukrainian studios whose game titles — such as GSC Game World’s “S.T.A.L.K.E.R.” — have been impacted.

“There have been a number of teams, honestly, that we were talking [with] about content for our show, that are in Ukraine, and they’ve had to relocate and can’t finish their trailer, can’t finish their game, because they’re in the middle of a situation,” Keighley said. “We’re conscious of those games and actively trying to think about what’s the right way to recognize some of those teams and the hardships that they’ve been through.”

Keighley made headlines in 2020, when he announced he was skipping E3 for the first time in 25 years, saying the event needed to evolve.

This year, Summer Game Fest will take place against the backdrop of another canceled E3, just as it did in 2020.

“You’ll find no bigger fan than me of what E3 represented to the industry. And I went to it for 25 years,” Keighley said. “I still think E3 needs to figure out its place in this new digital, global landscape. Game companies have figured out there are lots of great ways to program directly to fans. With Summer Game Fest, we’re very cognizant of that; we’re not just trying to be an E3 replacement. We’re doing something very different and approaching it as a free, digital-first celebration of games. The great thing is we can build it from the ground up into something completely new. And we don’t have the baggage and legacy of trying to sell booze to people or hotel rooms.”


Keighley told The Post last December that the other event he hosts, the Game Awards, would take a “thoughtful, measured” approach toward non-fungible tokens (NFTs). For this year’s Summer Game Fest, Keighley similarly said he had no plans to have anything NFT or blockchain-related.

“Some people are like, ‘Oh Geoff, I see you following an NFT account on Twitter.’ And it’s like, I’m interested to learn about that stuff. But I’ve yet to see anything that really crosses over to content that would be accretive to the experience. Look, if I see a game or experience that I think is really going to be compelling and interesting and leverages those technologies in a meaningful way, we’ll of course look at it,” Keighley said.

As for whether Activision Blizzard, a company facing multiple lawsuits and government investigations, will be present at Summer Game Fest, Keighley said the situation was evolving. Activision Blizzard did not immediately respond to a request for comment.

“In the back of our minds, obviously, is the zeitgeist of what’s going on at both of these companies but also, in the community,” he said. “Everyone’s opinions continue to evolve among all these topics, so it’s hard to put a pin in something and say, ‘Hey, this is exactly how we’re going to treat this throughout the entire year.’ ”

Another hotly discussed industry topic is unionization. When asked whether organizing labor would impact Summer Game Fest, Keighley said, “Trying to make our show is ultimately to support creators of games and let them showcase their work. I hope we empower game creators, through our shows, to reach audiences and feel like they can reach those audiences directly.”


Why I Prefer an eReader to a Real Book



Okay, I’m really doing this. Ahem. I prefer eReaders to real books. Now, before you report me to your local library for crimes against literature, let me explain. Maybe you’ll hear a new reason to give eReaders a chance.

This is a pretty touchy subject among readers. People who prefer physical books are often very passionate about that. It feels like people who like eReaders are the ones who have to defend their position. So allow me to defend my position.


Temporary Is Okay


First, let me start by saying I am not anti-physical books. I love real books. I love looking at cover art and I love the feel of a physical book in my hands. I think eReaders and books can peacefully co-exist.

My perspective on the place for eReaders vs real books is similar to how I view other forms of media. I might see a movie on Netflix that looks interesting and only watch it once. I don’t need to own a physical copy of it. Some things can be temporary.

Now, if there’s a movie or an album of music that’s important to me, then I want the physical copy. That’s the same philosophy I have toward books. There are so many books that I’ve read once and haven’t thought much about since. Owning the physical copy would just be adding clutter to my home.

Important, meaningful books are the ones I want to have in my possession forever. For everything else, the temporary feeling of an eBook makes an eReader the perfect option.


Streaming Books

There are very literal “streaming” services for books—such as Amazon Prime Reading—but even the general experience of an eReader is similar in a lot of ways to how we use streaming services.

Streaming services are great for browsing and easily switching between media. One day you’re in the mood for comedy, the next it’s drama. When you finally find something to get into, you can easily jump right into it every day until you’ve finished it.

That’s what I like about eReaders. I can easily browse through my library and decide what I’m in the mood to read. If a book doesn’t grab my attention quickly enough, I can easily switch to something else. I don’t have to bring a stack of books to my bed.

Physical media is a much more deliberate experience. Choosing a Blu-ray and putting it on is a commitment. Taking one book to the couch is a commitment. eReaders give you flexibility.


More Books, Less Weight

That flexibility also comes with some real-world benefits. Books are heavy, there’s no getting around that. If you want to take multiple books somewhere, you’re going to be carrying a lot of weight around.

The average eReader—such as the Kindle Paperwhite—can hold around 1,000 books per gigabyte of storage. That’s a lot. You can essentially go on vacation with your entire library of books in a device that weighs less than 8 ounces.
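
The arithmetic behind that figure is simple, assuming a typical text-only ebook weighs in at about a megabyte:

```python
avg_ebook_mb = 1       # rough size of a text-only ebook (assumption)
storage_gb = 8         # base Kindle Paperwhite storage
print(storage_gb * 1000 // avg_ebook_mb)  # ~8,000 books, ~1,000 per GB
```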

It’s really hard to overstate just how amazing that is. People love phones and streaming services for this same reason when it comes to music. It’s awesome to have all your music with you all the time. So what’s wrong with doing the same with your books?


Can We All Get Along?

There are a lot of things to get passionate about in life, especially when it comes to technology. iPhone vs Android. Windows vs Mac. Overused fonts. How to say GIF. I don’t think the eReader vs. real books debate needs to be one of them.

Nothing is ever going to replace physical books. Streaming music services are extremely popular, yet music is still being released on CDs and vinyl records. Movies still come out on Blu-ray and DVD. eReaders have been around for a long time and real books are still here.

An eReader is a handy device that every avid reader should consider. You don’t have to stop reading physical books, but you’ll appreciate the convenience in many situations.
