One of the most memorable moments of Matthew Taylor’s life so far would look to most people like just a jumble of numbers, brackets, and punctuation strung together with random words on a computer screen.

IF ((dist(K1,T1) <= 4) AND
    (Min(dist(K3,T1), dist(K3,T2)) >= 12.8) AND
    (ang(K3,K1,T1) >= 36))
THEN Pass to K3

And so on. Line after line of computer code flowing like a digital river of expanding possibilities.

Although sophisticated and wonderfully complex, it wasn’t so much the code itself that made this such a pivotal moment.

It was what came next.

Taylor, a graduate student in Texas at the time, used the custom code along with an otherwise basic computer soccer game to prove what many had previously only theorized about the development of artificial intelligence.

Robots, when given a series of increasingly difficult challenges to solve, will learn new tasks faster.

It’s known as transfer learning. While Taylor’s doctoral thesis used simulated soccer scenarios as a virtual test lab, the underlying algorithms behind those experiments have significantly broader potential, and the scientific world quickly took notice.

“When we were putting it together, we kept thinking, of course this should work, because it’s so obvious,” Taylor, now an assistant professor of computer science and director of Washington State University’s Intelligent Robot Learning Laboratory, recalls of the conversations he had with his graduate adviser. “But then, when it did work, we were like, ‘Yes!’”

Nearly a decade later, Taylor is still pioneering breakthroughs in machine learning.

The mathematical algorithms he and his WSU students are developing enable robots to learn increasingly complex tasks by interacting with humans and with each other—whether it’s helping the elderly safely remain in their own homes longer or protecting Washington’s fruit orchards from hungry birds.

Taylor’s goal is to bring it full circle, creating environments where humans and machines can teach and learn together.

“In order for that to happen, we have to develop ways that people can interact with machines without needing to be computer scientists,” he explains. “This has to be something that nonexperts can do…so that you don’t need a programmer every time you want a machine to do something different.”

Long viewed as little more than a staple of science fiction, artificial intelligence has stormed into the mainstream.

Autonomous delivery drones and self-driving cars. Chatbots that can diagnose medical conditions or help motorists fight parking tickets. Machines that can interpret language, read paper documents, and instantly search massive databases for otherwise elusive patterns.

A technological revolution is underway that, depending on whom you ask, is going to be either the pinnacle of human endeavor or the cause of societal collapse. Some of the biggest names in the world of science can be found on both sides of that debate.

Although WSU’s focus is largely on assistive AI, the ongoing development of robotic labor able to perform increasingly complex tasks faster and more efficiently than humans will test the fabric of society in the years ahead.

Advances in automated transportation, for example, mean most children born today may never learn to drive a car. And that’s just the start.

 

Back in the 1980s, Antonie Bodley was a youngster with a new friend.

Teddy Ruxpin was a robotic, storytelling teddy bear that never grew cranky or impatient. Its eyes blinked. Its mouth moved. And the bestselling toy of 1985 engaged kids with stories, songs, and comforting pronouncements about friendship and camaraderie.

It had a profound impact on Bodley, who quickly became more interested in how Teddy Ruxpin worked than in how all of those stories would end. She peeled back the layer of fur and peered inside to find the tangle of wires, tiny servo motors, and speakers that, along with a fresh set of batteries, could bring her animatronic friend to life.

Bodley, now 34, knows that’s where her fascination with robotics and artificial intelligence began to take root.

“Also, the character Data from Star Trek, which I discovered a little while later,” she says with a laugh. “I became really interested in how we, with science fiction, were able to better understand humanity through the experiences of an artificial robot.”

Bodley found herself being drawn beyond the technological side of artificial intelligence, instead pondering societal questions that are likely to emerge as machines continue to evolve. She spent several years exploring those issues, eventually turning them into the focus of an interdisciplinary doctoral degree, which she received from WSU in 2015.

“If we build machines that can learn and can react and can develop, then what have we really created?” she asks. “This, eventually, is going to become a very important question.”

It’s not just a matter of better understanding what AI represents; as Bodley and others explain, the basic role and purpose of humanity itself could become more ambiguous as machines outperform humans in ever-expanding ways.

To help guide her graduate research, Bodley pored over the classic works of science fiction masters such as Isaac Asimov, Arthur C. Clarke, and Robert A. Heinlein—futurists whose novels relied heavily on hard science. Many of the concepts and principles outlined in those Golden Age novels, specifically Asimov’s “Three Laws of Robotics,” have since been embraced by the scientific community and adopted into current AI research.

She also examined what’s known as soft sci-fi—futuristic tales loosely inspired by science but focused more on fantastical depictions of advanced universes. Soft sci-fi tends to put greater emphasis on those what-if scenarios that can reshape entire societies, which is where Bodley’s interests have continued to migrate.

She sees a reckoning on the horizon: “We’re going to need continued discussion and active dialogue between futurists and the rest of society as AI continues to progress, because we do run the risk of irrevocably damaging the current structures of society.”

Bodley, however, is not among those pushing the panic button.

She acknowledges the transition could be economically disruptive, perhaps even painful, for many as unemployment grows. It likely will be emotionally difficult as well, because careers often are as much about personal identity and purpose as about providing necessary household income.

Bodley instead believes the pursuit of AI represents a logical progression of an important and distinctly human trait—innovation.

“We’re already seeing what’s known as weak AI in assistive roles in the workplace and throughout society, everything from how we can ask Siri for directions or let Netflix pick the next program based on our previous viewing choices,” she says. “Strong AI takes it to the next level—machines that, essentially, think.

“That’s where I believe AI will begin to challenge the framework of what constitutes humanity.”

 

Before leaving the White House, former President Barack Obama commissioned a study into how the nation should best prepare for rapid advances in AI and what they will mean for our way of life.

The panel ended up issuing two separate reports, including one that focused exclusively on the potentially massive job losses ahead as computers and machines learn to perform increasingly sophisticated tasks.

“These transformations will open up new opportunities for individuals, the economy, and society, but they have the potential to disrupt the current livelihoods of millions of Americans,” the report warns. “Whether AI leads to unemployment and increases in inequality over the long-run depends not only on the technology itself but also on the institutions and policies that are in place.”

Historically, technological advances at various levels have contributed to job loss but, over time, the increased productivity and shifts in training to fill newly created needs have more than overcome those initially painful setbacks. The economy grew and lives improved.

The effects of rapidly developing AI already are being felt in the workplace, but the major disruptions likely are still 50 to 65 years away, despite some studies suggesting they will come much sooner. Researchers at the London-based McKinsey Global Institute predict a massive but gradual loss of employment.

“Even when the technical potential exists, we estimate it will take years for automation’s effect on current work activities to play out fully,” the institute’s research team wrote in January 2017. “The pace of automation, and thus its impact on workers, will vary across different activities, occupations, and wage and skill levels.”

One of the primary obstacles to rapid transition is the high cost of acquiring AI-equipped machinery. For example, technology that would enable commercial hauling to be turned over to self-driving trucks already is being fine-tuned, but few, if any, companies have the capital readily available to immediately replace their entire fleets with new high-tech trucks.

As those transitions occur, AI is more likely to be used at least initially to take over certain tasks or aspects of a given job rather than the entire job itself. It means human workers will still be needed but in smaller numbers.

Although low-skilled, repetitive jobs are widely considered the first to be replaced, advances in what’s known as natural language recognition—the ability of computers to interpret conversations and read paper documents—are putting solidly middle-class careers at greater risk as well.

Many functions within the banking, legal, and accounting professions are seen as vulnerable, while certain medical and even journalism skills could be performed by machines as well.

“We estimate that about half of all the activities people are paid to do in the world’s workforce could potentially be automated by adapting currently demonstrated technologies,” according to McKinsey. “That amounts to almost $15 trillion in wages.”

At WSU, the decision to put greater emphasis on artificial intelligence was as much about practicality as pushing the boundaries of science.

“With the economic downturn (back around 2007 to 2010), and the strained budgets that followed, everyone was having to make careful choices about how best to use what we had available,” explains Behrooz Shirazi, who served as director of WSU’s School of Electrical Engineering and Computer Science at the time and now leads a new WSU initiative exploring AI applications for improving community health.

At the time, an advisory board consisting of tech industry leaders had identified machine learning, which is a crucial subset of artificial intelligence, as an emerging field. WSU already had two widely recognized experts on faculty, Diane Cook and Larry Holder, a husband-and-wife computer science team that Shirazi had brought to Pullman with him from the University of Texas at Arlington.

“Diane already was doing great things with smart technology,” Shirazi says, pointing specifically to her development of in-home AI that blends machine learning with pervasive computing to provide remote health monitoring and intervention. “Then we found Matt Taylor and that was like this big bonus.”

 

From her office across the hall from WSU’s Intelligent Robot Learning Laboratory, computer scientist and WSU doctoral student Bei Peng is experimenting with ways to simplify and improve interactions between humans and advanced machines.

“The goal is to combine AI systems with human intelligence,” says Peng, a Chinese scholar who worked as a programmer before enrolling at WSU to study with Taylor. “It helps machines learn quicker.”

That’s a key part of AI research at WSU. Another is developing systems that anyone—not just computer scientists—can use.

Peng is trying to do both and her experiments already are drawing notice.

She started with a time-tested animal training technique, essentially a “good dog, bad dog” approach to reinforcement learning intended to speed up how quickly machines learn.

Using a virtual floor plan, a machine—typically referred to as an agent by researchers and represented as an icon on the computer screen—is given a series of basic but increasingly complex tasks to complete. Human operators provide feedback, a +1 for each correct move or choice and a -1 for each blunder.

A typical task would be something like: Move the chair to the blue room.

The machine has to learn to navigate the floor plan, distinguish a chair from other objects, and differentiate colors. Those lessons, however, carry over from one scenario to the next, so subsequent challenges are accomplished faster as basic colors, object identification, and floor plan layouts are learned.
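Peng’s actual experiments are more sophisticated, but the “good dog, bad dog” loop itself can be sketched simply. In this illustrative Python toy (the hallway task, the trainer function, and every name here are invented for the example, loosely in the spirit of human-feedback frameworks such as TAMER), the agent tries actions, a simulated human answers with +1 or −1, and the agent nudges its estimate of each state-action pair toward that feedback until its policy matches what the trainer wanted:

```python
import random

# Hypothetical task: positions 0..3 in a hallway; the "right" move (action 1)
# always heads toward the chair. Action 0 heads the wrong way.
TARGET = {s: 1 for s in range(4)}  # the trainer's notion of the correct move

def trainer_feedback(s, a):
    """Simulated human: presses +1 for the move they wanted, -1 otherwise."""
    return 1.0 if a == TARGET[s] else -1.0

def train(rounds=200, lr=0.3):
    """Learn a model of the human's feedback for each state-action pair."""
    h = {(s, a): 0.0 for s in range(4) for a in (0, 1)}
    random.seed(42)
    for _ in range(rounds):
        s = random.randrange(4)
        a = random.choice((0, 1))          # explore: try an action
        f = trainer_feedback(s, a)         # human responds +1 or -1
        h[(s, a)] += lr * (f - h[(s, a)])  # nudge estimate toward the feedback
    return h

h = train()
# Act greedily with respect to the predicted human feedback
policy = {s: max((0, 1), key=lambda a: h[(s, a)]) for s in range(4)}
```

After a couple hundred rounds of feedback, the learned policy simply reproduces the trainer’s intent in every state—no programming required of the human, only button presses.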

Think of it as a kind of digital, twenty-first-century version of Flowers for Algernon.

The research team, however, wasn’t done.

In order to make it interactive, the communication had to go both ways.

So, the machines were given the ability to adjust the speed of their movement through the floor plan based on the confidence level of the actions they take. Slow movement, for example, serves as a visual cue for uncertainty, telling humans the machine wants guidance.
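One simple way to produce such a cue (a hypothetical sketch, not Peng’s actual implementation) is to map the gap between the machine’s best and second-best action values onto its movement speed: a wide gap means a confident, brisk move, while a near-tie slows the agent to a crawl that invites correction.

```python
def confidence(q_values):
    """Gap between the best and second-best action values, clipped to [0, 1]."""
    top, runner_up = sorted(q_values, reverse=True)[:2]
    return min(1.0, top - runner_up)

def movement_speed(q_values, v_min=0.2, v_max=1.0):
    """Slow movement signals uncertainty -- a visible request for human guidance."""
    return v_min + (v_max - v_min) * confidence(q_values)

sure = movement_speed([2.0, 0.3, 0.1])      # clear winner: full speed
unsure = movement_speed([0.51, 0.50, 0.1])  # near-tie: crawl, ask for help
```

The thresholds and the linear mapping here are arbitrary choices for illustration; the point is that internal uncertainty becomes something a non-expert can read at a glance.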

“What we’re trying to understand better is how humans can teach robots,” Peng says.

Although much of the AI research being done now is in virtual settings, the underlying algorithms being developed can be used in hardware that operates in the physical world. One of the most visible examples is within the auto industry, where complex computer code is what enables the real-time data processing needed for driverless vehicles to operate autonomously.

 

Taylor, the robot lab director at WSU, believes the era of easily programmable consumer AI is near.

“Where I want to see this go is getting robots that can learn from humans into homes,” he says, explaining there’s a significant difference between the hard-coded AI limited to pre-defined tasks and autonomous machines that can learn and adapt to various needs. “We’re seeing more and more AI in the home already but it’s pretty much all preprogrammed. We need to take it to the next step.”

Taylor, an Allred Distinguished Professor, also is working on commercial applications, including autonomous drones that work together to protect fruit orchards by chasing away flocks of hungry birds. Half of Washington’s growers identify birds as a significant contributor to crop loss.

The project is intriguing to Taylor because it involves developing ways for drones to communicate with each other to share information and coordinate an effective response. Simply designing a grid pattern atop an orchard for a drone to continuously follow would quickly become ineffective because birds can spot patterns and exploit them.

“This has to be something that’s done autonomously,” he says. “If you have to have someone out there controlling the drones, then it isn’t really an advantage.”

Last fall, Taylor was invited to Microsoft headquarters in Redmond to talk with the company’s researchers about his work and his belief that the development of machines that can be programmed or trained by anyone is key to AI’s expansion.

He described what he sees as a necessary cooperative approach.

“I’m really interested in techniques that work with normal humans,” he told the roomful of research scientists, drawing a rumbling chuckle from the audience. “A lot of our research is computer geeks teaching robots or teaching agents how to do things. And the problem is, we already know how the machine-learning algorithms work.

“So, ideally, however we’re doing this teaching it would work for non-expert humans—people who don’t understand AI.”

Taylor also considers it important for machines to be able to pass along learned lessons to other machines so people won’t have to start from scratch whenever they get an upgraded version.

That’s a concept he knows might frighten many people, but he downplays any concern.

He sees artificial intelligence as improving lives and aiding independence, particularly as people age.

And while he, like others, acknowledges the potential for workplace disruptions, he doesn’t consider that to necessarily be a bad thing—provided it’s the right jobs being taken over. Taylor says AI tech is particularly well-suited for the jobs that tend to be “dirty, dangerous, and dull,” noting those are the ones people typically don’t want, anyway.

Bodley, meanwhile, has given thought to what the world might be like when machines do nearly everything faster and more efficiently.

The transition will, of course, be difficult for some, she acknowledges, and fears about an identity crisis for the human race likely aren’t exaggerated.

But she finds comfort in the wisdom of philosophers who long ago sought to define what it meant to be human.

“I’d like to think that instead of disorder, society reaches a place where individuals could use the time that used to be consumed by the demands of daily work schedules and careers to focus on what actually makes us human,” she says. “That’s the ability to create, to innovate, and to love. Perhaps even love our robotic companions.”

Web extra

 
Robowriters — Sports and business reports, courtesy of artificial intelligence