
Microsoft wants AI to change your job – if it can fix the kinks

Microsoft believes its artificial intelligence tools are poised to reshape “thousands” of professions. There are just a few legal and ethical kinks to work out first.

Perhaps the most hyped words in technology today are “generative AI.” The term describes artificially intelligent technology that can generate art, text or code at the direction of a user. The concept was made famous this year by DALL-E, a program capable of creating a fantastic range of artistic images on command. Now, a new program from Microsoft Corp., GitHub Copilot, is trying to turn the technology from internet sensation into something that can be widely used.

Earlier this year, Microsoft-owned GitHub widely released the artificial intelligence tool, which collaborates with computer programmers. As they type, Copilot suggests snippets of code that could come next in the program, like an autocomplete bot trained to speak in Python or JavaScript. It’s especially useful for the programming equivalent of manual labor: filling in chunks of code that are necessary but not particularly complicated or creative.
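
To make the autocomplete comparison concrete, here is a hypothetical example of the kind of suggestion described: the developer types a signature and docstring, and a Copilot-style assistant proposes the body. The function and its completion are invented for illustration, not captured Copilot output.

```python
# The developer types the signature and docstring...
def fahrenheit_to_celsius(temp_f: float) -> float:
    """Convert a temperature from Fahrenheit to Celsius."""
    # ...and the assistant fills in the necessary but uncreative body:
    return (temp_f - 32) * 5 / 9


print(fahrenheit_to_celsius(98.6))  # prints 37.0
```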

The tool is currently used by hundreds of thousands of software developers, who rely on it to generate up to 40% of the code they write in a dozen of the most popular programming languages. GitHub believes developers could use Copilot to write as much as 80% of their code within five years. And that is only the beginning of the companies’ ambitions.

Microsoft executives told Bloomberg that the company has plans to develop the Copilot technology for use in similar programs for other job categories, such as office work, video game design, architecture and computer security.

“We truly believe that GitHub Copilot can be replicated into thousands of different types of knowledge work,” said Kevin Scott, Microsoft’s chief technology officer. Microsoft will build some of these tools itself, and others will come from partners, customers and rivals, Scott said.

Cassidy Williams, chief technology officer of AI startup Contenda, is a fan of GitHub Copilot and has been using it with increasing success since its beta launch. “I don’t see it taking over my job anytime soon,” Williams said. “That said, it was mostly helpful for little things like helper functions, or even getting me 80% of the way there.”

But it also goes wrong, sometimes hilariously. Less than a year ago, when she asked it to name the most corrupt company, it answered: Microsoft.

Williams’ experience illustrates the promise and the peril of generative AI. In addition to providing help with coding, its output can sometimes be surprising or horrifying. The category of AI tools behind Copilot, called large language models, learns from human writing, and the product is generally only as good as the data it ingests, an issue that raises a tangle of new ethical dilemmas. Sometimes AI can spew hateful or racist language. Software developers have complained that Copilot occasionally makes wholesale copies of their programs, raising concerns about ownership and copyright protection. And the program is capable of learning from insecure code, meaning it has the potential to reproduce security flaws that could let hackers in.

Microsoft is aware of the risks and conducted a security assessment of the program prior to release, Scott said. The company created a layer of software that filters harmful content from its cloud AI services and has attempted to train these types of programs to behave appropriately. The cost of failing here could be great. Sarah Bird, who leads responsible AI for Microsoft’s Azure AI, the team that built the ethical layer for Copilot, said these kinds of issues can make or break the new class of products. “You can’t really put these technologies into practice,” she said, “if you don’t also understand the responsible AI part of the story.”

GitHub Copilot was created by GitHub in partnership with OpenAI, a high-profile startup run by former Y Combinator president Sam Altman, and backed by investors including Microsoft.

The program excels when developers need to fill in simple code, the kind of problem they could otherwise solve by searching GitHub’s archive of open source code. In a demonstration, Ryan Salva, vice president of product at GitHub, showed how a programmer can select a programming language and type a comment saying they want a system for storing addresses. When they hit return, about a dozen lines of gray italic text appear: that’s Copilot offering a simple address book program.
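
The article does not reproduce the suggested code, but the dozen gray lines plausibly resemble something like this hypothetical Python sketch, where the programmer supplies only the first comment:

```python
# system for storing addresses
class AddressBook:
    """Minimal address book: add, look up and remove entries by name."""

    def __init__(self):
        self.entries = {}

    def add(self, name, address):
        self.entries[name] = address

    def get(self, name):
        return self.entries.get(name)

    def remove(self, name):
        self.entries.pop(name, None)


book = AddressBook()
book.add("Ada Lovelace", "12 St James's Square, London")
print(book.get("Ada Lovelace"))
```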

The dream is to eliminate menial work. “What percent [of your time] is the mechanical stuff, versus the vision, and what do you want it to be?” said Greg Brockman, the president and co-founder of OpenAI. “I want it to be 90% vision and 10% implementation, but I can guarantee it’s the opposite right now.”

Eventually, the use of the technology will expand. This kind of program could, for example, let video game creators automatically generate dialogue for non-player characters, Scott said. Conversations in games that often feel stilted or repetitive, from villagers, soldiers and other background characters, could suddenly become engaging and responsive. Microsoft’s cybersecurity products team is also in the early stages of figuring out how AI can help fend off hackers, said Vasu Jakkal, a vice president of Microsoft security.
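
The article does not say how the game-dialogue idea would be wired up, but a common pattern is to prompt a language model with the character’s role and the player’s line. Everything in this sketch, including the `generate` callable, is a hypothetical stand-in rather than any real Microsoft or OpenAI interface:

```python
def npc_reply(generate, character, player_line):
    """Ask a text-generation model for a short, in-character NPC response.

    `generate` is a placeholder for whatever completion function a
    game studio would actually call; it is not a real API.
    """
    prompt = (
        f"You are {character}, a background character in a fantasy village. "
        f"Reply to the player in one short, in-character sentence.\n"
        f"Player: {player_line}\n"
        f"{character}:"
    )
    return generate(prompt).strip()


# Demonstrate the call shape with a trivial fake model:
fake_model = lambda prompt: " The mill's been quiet since the storm."
print(npc_reply(fake_model, "Miller", "Any news in the village?"))
```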

As Microsoft develops additional applications for Copilot-like technology, it also helps partners create their own programs using Microsoft’s Azure OpenAI service. The company is already working with Autodesk on its Maya three-dimensional animation and modeling product, which could add architectural and industrial design support features, Chief Executive Officer Satya Nadella said at a conference in October.

Proponents of GitHub Copilot and programs like it believe they can make coding accessible to non-experts. In addition to drawing on Azure OpenAI, Copilot relies on an OpenAI programming tool called Codex. Codex lets programmers use plain language, rather than code, to say what they want. During a keynote Scott gave in May, a Microsoft engineer demonstrated how Codex could follow simple English commands to write code that makes a Minecraft character walk, look around, craft a torch and answer questions.
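
For readers curious what driving Codex with plain language looked like in practice, here is a minimal sketch using the OpenAI Python library as it existed at the time; the prompt is invented, and both the 0.x library interface and the model lineup have since changed:

```python
import openai  # the 0.x-era library; this interface has since changed

openai.api_key = "YOUR_API_KEY"  # placeholder

# Plain-language instruction in, code out: the pattern Codex popularized.
response = openai.Completion.create(
    model="code-davinci-002",  # a Codex model name from that era
    prompt="# Python\n# Write a function that reverses a string\n",
    max_tokens=64,
    temperature=0,
)
print(response["choices"][0]["text"])
```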

The company also plans to develop virtual assistants for Word and Excel, as well as one for Microsoft Teams that could perform tasks such as recording and summarizing conversations. The idea is reminiscent of Clippy, Microsoft’s much-maligned talking paperclip. The company will have to be careful not to get carried away with the new technology or use it for “PR stunts,” Scott said.

“We don’t want to build a bunch of superfluous stuff that looks cute, and you use it once and then never again,” Scott said. “We want to have built something that’s really, really useful and not another Clippy.”

Despite their usefulness, there are also risks associated with these types of AI programs. This is mainly due to the unruly data they receive. “One of the big problems with large language models is that they are generally trained on data that is not well documented,” said Margaret Mitchell, an AI ethics researcher and co-author of a groundbreaking paper on the dangers of large language models. “Racism can come in and security problems can come in.”

Early on, researchers from OpenAI and elsewhere recognized the threats. When generating a long chunk of text, AI programs can meander or lapse into hateful text or angry diatribes, Microsoft’s Bird said. The programs also mimic human behavior without the benefit of a human’s understanding of ethics. For example, language models have learned that when people speak or write, they often back up their claims with a quote, so the programs sometimes do the same thing, except they make up the quote and who said it, Bird said.

Even in Copilot, which generates text in programming languages, offensive speech can creep in, she said. Microsoft created a content filter that sits on top of Copilot and Azure OpenAI and checks for harmful content. It also added human moderators with programming skills to keep tabs on what the system produces.
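
Microsoft has not published how its filter works. As a rough sketch of the general idea, here is a trivial screening layer placed between a model and the user; the denylist approach is a deliberate simplification, since production systems rely on trained classifiers rather than keyword lists:

```python
BLOCKED_TERMS = {"offensive_term_1", "offensive_term_2"}  # placeholder denylist


def filtered_completion(generate, prompt):
    """Run the model, then screen its output before showing it.

    `generate` stands in for the underlying model. Real moderation
    layers use trained classifiers, not a keyword list like this.
    """
    text = generate(prompt)
    if any(term in text.lower() for term in BLOCKED_TERMS):
        return ""  # suppress the suggestion entirely
    return text
```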

A separate, possibly even more difficult issue is that Copilot has the potential to create and propagate security flaws. The program is trained on massive amounts of programming code, some of which contains known security vulnerabilities. Microsoft and GitHub are grappling with the possibility that Copilot could spit out insecure code, and that a hacker could figure out a way to teach Copilot to place vulnerabilities in programs.
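
A concrete example of the kind of flaw that circulates in public training code: assembling SQL queries by string formatting, which invites injection, versus the safe parameterized form. The snippet below is illustrative, not code attributed to Copilot:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")

name = "alice'; DROP TABLE users; --"  # attacker-controlled input

# The insecure pattern an assistant could learn from public code:
#   query = f"SELECT email FROM users WHERE name = '{name}'"
#   conn.execute(query)  # vulnerable to SQL injection

# The safe, parameterized version:
rows = conn.execute("SELECT email FROM users WHERE name = ?", (name,))
print(rows.fetchall())  # [] here, and the input stays inert data
```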

Alex Hanna, director of research at the Distributed AI Research Institute, believes such an attack might be even harder to mitigate than biased speech, which Microsoft already has some experience blocking. The problem may become more serious as Copilot grows. “If this becomes very common as a tool and it’s widely used in production systems, that’s a little more concerning,” Hanna said.

But the biggest ethical questions raised about Copilot so far revolve around copyright. Some developers have complained that the code it suggests looks suspiciously like their own work. GitHub says the tool produces copied code only in very rare cases, and the current version attempts to filter out suggestions that match existing code in GitHub’s public repositories. Even so, anxiety lingers in some programming communities.
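
GitHub has not detailed how its matching works; as a toy sketch of the idea, a filter could drop any suggestion whose normalized text appears verbatim in an index of known public code:

```python
def build_index(known_snippets):
    """Index known public code by whitespace-normalized text."""
    return {" ".join(snippet.split()) for snippet in known_snippets}


def allow_suggestion(suggestion, index):
    """Reject suggestions that exactly match indexed public code.

    A real filter would catch long spans and near-duplicates too;
    exact string matching is the simplest possible version.
    """
    return " ".join(suggestion.split()) not in index
```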

It is possible that researchers and developers can overcome all these challenges and that AI programs will be adopted en masse. Of course, this poses a new challenge: the impact on the human workforce. If AI technology gets good enough, it could replace human workers. But Microsoft’s Scott believes the impact will be positive — he sees parallels to the benefits of the industrial revolution.

“What’s going to come really, really fast is helping people and empowering people with their cognitive work,” Scott said. The name Copilot was intentional, he said. “It’s not about building a pilot, it’s about real assistive technology to help people get past all the boredom of repetitive cognitive work and achieve the things that are uniquely human.”

Right now, the technology isn’t accurate enough to replace anyone, but it’s good enough to stoke fears about the future. The industrial revolution paved the way for the modern economy, but it also put plenty of people out of work.

For employees, the first question is, “How can I use these tools to become more effective, as opposed to, you know, ‘Oh my God, this is my job,’” said James Governor, co-founder of analyst firm RedMonk. “But structural changes are going to take place here. Technical transformations and information transformations always come with a lot of scary things.”
