United States

Europe To US: Pass New Laws If You Want a Data-Transfer Deal (politico.eu) 41

The United States must pass new legislation to limit how its national security agencies access Europeans' data if Washington and Brussels are to hammer out a new deal on transferring people's digital information across the Atlantic, according to European Commission Vice President Vera Jourova. From a report: Speaking at POLITICO's AI summit on Monday, the Czech politician said the U.S. needed to create legally binding laws to provide European Union citizens the ability to challenge bulk data collection by federal authorities in U.S. courts. The goal, she said, would be "to have legally binding rules, or rule, on the U.S. side guaranteeing this. It's of course the best and the strongest way to do that," said Jourova when asked if the Commission would accept a presidential executive order or would require new U.S. legislation to provide EU citizens with the power to sue over how U.S. national security agencies collected and used their data.
Supercomputing

World's Fastest AI Supercomputer Built from 6,159 NVIDIA A100 Tensor Core GPUs (nvidia.com) 57

Slashdot reader 4wdloop shared this report from NVIDIA's blog, joking that maybe this is where all NVIDIA's chips are going: It will help piece together a 3D map of the universe, probe subatomic interactions for green energy sources and much more. Perlmutter, officially dedicated Thursday at the National Energy Research Scientific Computing Center (NERSC), is a supercomputer that will deliver nearly four exaflops of AI performance for more than 7,000 researchers. That makes Perlmutter the fastest system on the planet on the 16- and 32-bit mixed-precision math AI uses. And that performance doesn't even include a second phase coming later this year to the system based at Lawrence Berkeley National Lab.
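The "nearly four exaflops" figure checks out against published per-GPU numbers. A back-of-the-envelope sketch (the 624 TFLOPS per A100 for sparse FP16 tensor math is an assumption drawn from NVIDIA's spec sheet, not stated in the excerpt):

```python
# Sanity check of Perlmutter's claimed AI performance.
# Assumption: 624 TFLOPS per A100 for FP16 tensor math with sparsity
# (NVIDIA's spec-sheet peak, not a figure from the article itself).
GPUS = 6159
FLOPS_PER_A100_SPARSE_FP16 = 624e12  # operations per second

total_flops = GPUS * FLOPS_PER_A100_SPARSE_FP16
print(f"{total_flops / 1e18:.2f} exaflops")  # → 3.84 exaflops: "nearly four"
```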

More than two dozen applications are getting ready to be among the first to ride the 6,159 NVIDIA A100 Tensor Core GPUs in Perlmutter, the largest A100-powered system in the world. They aim to advance science in astrophysics, climate science and more. In one project, the supercomputer will help assemble the largest 3D map of the visible universe to date. It will process data from the Dark Energy Spectroscopic Instrument (DESI), a kind of cosmic camera that can capture as many as 5,000 galaxies in a single exposure. Researchers need the speed of Perlmutter's GPUs to capture dozens of exposures from one night to know where to point DESI the next night. Preparing a year's worth of the data for publication would take weeks or months on prior systems, but Perlmutter should help them accomplish the task in as little as a few days.

"I'm really happy with the 20x speedups we've gotten on GPUs in our preparatory work," said Rollin Thomas, a data architect at NERSC who's helping researchers get their code ready for Perlmutter. DESI's map aims to shed light on dark energy, the mysterious physics behind the accelerating expansion of the universe.

A similar spirit fuels many projects that will run on NERSC's new supercomputer. For example, work in materials science aims to discover atomic interactions that could point the way to better batteries and biofuels. Traditional supercomputers can barely handle the math required to generate simulations of a few atoms over a few nanoseconds with programs such as Quantum Espresso. But by combining their highly accurate simulations with machine learning, scientists can study more atoms over longer stretches of time. "In the past it was impossible to do fully atomistic simulations of big systems like battery interfaces, but now scientists plan to use Perlmutter to do just that," said Brandon Cook, an applications performance specialist at NERSC who's helping researchers launch such projects. That's where Tensor Cores in the A100 play a unique role. They accelerate both the double-precision floating point math for simulations and the mixed-precision calculations required for deep learning.
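The simulate-a-little, learn-a-lot approach above can be sketched with a toy surrogate model: a few "expensive" energy evaluations train a cheap model that then answers many queries. Here simple linear interpolation stands in for the neural-network potentials scientists actually train, and a Lennard-Jones pair energy stands in for a costly code like Quantum Espresso; both substitutions are illustrative assumptions.

```python
import bisect

def expensive_energy(r):
    # Stand-in for a costly first-principles simulation:
    # a Lennard-Jones pair energy in reduced units.
    return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

# "Train" on a handful of expensive evaluations...
xs = [0.95 + 0.05 * i for i in range(10)]
ys = [expensive_energy(x) for x in xs]

def surrogate(r):
    # ...then answer many queries cheaply by interpolating between them.
    i = min(max(bisect.bisect_left(xs, r), 1), len(xs) - 1)
    x0, x1 = xs[i - 1], xs[i]
    t = (r - x0) / (x1 - x0)
    return (1 - t) * ys[i - 1] + t * ys[i]

# The surrogate tracks the expensive model at points it never evaluated
print(expensive_energy(1.12), surrogate(1.12))
```

The real payoff comes when each "expensive" call is hours of supercomputer time and the learned model is accurate enough to replace most of them.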

Microsoft

Millions Can Now Run Linux GUI Apps in Windows 10 (bleepingcomputer.com) 201

"You can now use GUI app support on Windows Subsystem for Linux (WSL)," Microsoft announced this week, "so that all the tools and workflows of Linux run on your developer machine." Bleeping Computer has already tested it running Gnome's file manager Nautilus, the open-source application monitor/task manager Stacer, the backup software Timeshift, and even the game Hedgewars.

Though it's currently available only to the millions who've registered for Windows 10 "Insider Preview" builds, it's already drawing positive reviews. "With the Windows Subsystem for Linux, developers no longer need to dual-boot a Windows and Linux system," argues the Windows Central site, "as you can now install all the Linux stuff a developer would need right on top of Windows instead."

Finally formally announced at this week's annual Microsoft Build conference, the new functionality runs graphical Linux apps "seamlessly," according to Tech Radar, calling the feature "highly anticipated." Arguably one of the biggest, and surely the most exciting, updates to Windows 10's WSL, WSLg has been in the works at Microsoft for quite a while; the company first demoed it at last year's conference before releasing a preview in April... Microsoft recommends running WSLg after enabling support for virtual GPU (vGPU) for WSL, in order to take advantage of 3D acceleration within the Linux apps.... WSLg also supports audio and microphone devices, which means the graphical Linux apps will also be able to record and play audio.

Keeping in line with its developer slant, Microsoft also announced that since WSLg can now help Linux apps leverage the graphics hardware on the Windows machine, the subsystem can be used to efficiently run Linux AI and ML workloads... If WSLg developers are to be believed, the update is expected to be generally available alongside the upcoming release of Windows.

Bleeping Computer explains that WSLg launches a "companion system distro" with Wayland, X, and Pulse Audio servers, calling its bundling with Windows 10 "an exciting development as it blurs the lines between Linux and Windows 10, and fans get the benefits of both worlds."
AI

Jerusalem Post: Israel's Gaza Strip Bombing Was 'World's First AI War' (jpost.com) 276

"For the first time, artificial intelligence was a key component and power multiplier in fighting the enemy," says a senior officer in the intelligence corps of the Israeli military, describing the technology's use in 11 days of fighting in the Gaza Strip.

They're quoted in a Jerusalem Post article on "the world's first AI war": Soldiers in Unit 8200, an Intelligence Corps elite unit, pioneered algorithms and code that led to several new programs called "Alchemist," "Gospel" and "Depth of Wisdom," which were developed and used during the fighting. Collecting data using signal intelligence, visual intelligence, human intelligence, geographical intelligence, and more, the Israel Defense Forces (IDF) has mountains of raw data that must be combed through to find the key pieces necessary to carry out a strike. "Gospel" used AI to generate recommendations for troops in the research division of Military Intelligence, which used them to produce quality targets and then passed them on to the IAF to strike...

While the IDF had gathered thousands of targets in the densely populated coastal enclave over the past two years, hundreds were gathered in real time, including missile launchers that were aimed at Tel Aviv and Jerusalem. The military believes using AI helped shorten the length of the fighting, having been effective and quick in gathering targets using super-cognition. The IDF carried out hundreds of strikes against Hamas and PIJ, including rocket launchers, rocket manufacturing, production and storage sites, military intelligence offices, drones, commanders' residences and Hamas's naval commando unit. Israel has destroyed most of the naval commando unit's infrastructure and weaponry, including several autonomous GPS-guided submarines that can carry 30 kg. of explosives.

IDF Unit 9900's satellites have gathered geographical intelligence over the years. They were able to automatically detect changes in terrain in real time so that during the operation, the military was able to detect launching positions and hit them after firing. For example, Unit 9900 troops using satellite imagery were able to detect 14 rocket launchers that were located next to a school... One strike, against senior Hamas operative Bassem Issa, was carried out with no civilian casualties despite being in a tunnel under a high-rise building surrounded by six schools and a medical clinic... Hamas's underground "Metro" tunnel network was also heavily damaged over the course of several nights of airstrikes. Military sources said they were able to map the network, consisting of hundreds of kilometers under residential areas, to a degree where they knew almost everything about them.

The mapping of Hamas's underground network was accomplished through a massive intelligence-gathering process, aided by these technological developments and the use of Big Data to fuse all the intelligence.

Social Networks

Twitter and Facebook Admit They Wrongly Blocked Millions of Posts About Gaza Strip Airstrikes (msn.com) 151

"Just days after violent conflict erupted in Israel and the Palestinian territories, both Facebook and Twitter copped to major faux pas: The companies had wrongly blocked or restricted millions of mostly pro-Palestinian posts and accounts related to the crisis," reports the Washington Post: Activists around the world charged the companies with failing a critical test: whether their services would enable the world to watch an important global event unfold unfettered through the eyes of those affected. The companies blamed the errors on glitches in artificial intelligence software.

In Twitter's case, the company said its service mistakenly identified the rapid-firing tweeting during the confrontations as spam, resulting in hundreds of accounts being temporarily locked and the tweets not showing up when searched for. Facebook-owned Instagram gave several explanations for its problems, including a software bug that temporarily blocked video-sharing and saying its hate speech detection software misidentified a key hashtag as associated with a terrorist group.

The companies said the problems were quickly resolved and the accounts restored. But some activists say many posts are still being censored. Experts in free speech and technology said that's because the issues are connected to a broader problem: overzealous software algorithms that are designed to protect but end up wrongly penalizing marginalized groups that rely on social media to build support... Despite years of investment, many of the automated systems built by social media companies to stop spam, disinformation and terrorism are still not sophisticated enough to detect the difference between desirable forms of expression and harmful ones. They often overcorrect, as in the most recent errors during the Israeli-Palestinian conflict, or they under-enforce, allowing harmful misinformation and violent and hateful language to proliferate...

Jillian York, a director at the Electronic Frontier Foundation, an advocacy group that opposes government surveillance, has researched tech company practices in the Middle East. She said she doesn't believe that content moderation — human or algorithmic — can work at scale... Palestinian activists and experts who study social movements say it was another watershed historical moment in which social media helped alter the course of events...

Payment app Venmo also mistakenly suspended transactions of humanitarian aid to Palestinians during the war. The company said it was trying to comply with U.S. sanctions and had resolved the issues.
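The over- and under-enforcement problem the experts describe is, at bottom, a classification-threshold tradeoff: lower the bar and legitimate posts get blocked, raise it and harmful ones slip through. A toy sketch (scores and labels are invented for illustration):

```python
# Toy moderation classifier: each post gets a "harmful" score in [0, 1].
# Labels: 1 = actually harmful, 0 = legitimate. All data is invented.
posts = [(0.95, 1), (0.80, 1), (0.72, 0), (0.65, 1),
         (0.55, 0), (0.40, 0), (0.35, 1), (0.10, 0)]

def errors(threshold):
    false_pos = sum(1 for s, y in posts if s >= threshold and y == 0)  # wrongly blocked
    false_neg = sum(1 for s, y in posts if s < threshold and y == 1)   # harmful, missed
    return false_pos, false_neg

print(errors(0.3))  # → (3, 0): aggressive, blocks legitimate posts ("overcorrect")
print(errors(0.9))  # → (0, 3): lenient, lets harmful posts through ("under-enforce")
```

No single threshold eliminates both error types; at the scale of billions of posts, even small error rates mean millions of wrong decisions.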

Australia

Robots and AI Will Guide Australia's First Fully Automated Farm (abc.net.au) 41

"Robots and artificial intelligence will replace workers on Australia's first fully automated farm," reports Australia's national public broadcaster ABC.

The total cost of the farm's upgrade? $20 million. Charles Sturt University in Wagga Wagga will create the "hands-free farm" on a 1,900-hectare property to demonstrate what robots and artificial intelligence can do without workers in the paddock... The farm will use robotic tractors, harvesters, survey equipment and drones; artificial intelligence that will handle sowing, dressing and harvesting; new sensors to measure plants, soils and animals; and carbon management tools to minimise the carbon footprint.

The farm is already operated commercially and grows a range of broadacre crops, including wheat, canola and barley; it also runs a vineyard, cattle and sheep.

Cloud

Coalition Including Microsoft, Linux Foundation, GitHub Urge Green Software Development (bloombergquint.com) 136

"To help realize the possibility of carbon-free applications, Microsoft, the consultancies Accenture and ThoughtWorks, the Linux Foundation, and Microsoft-owned code-sharing site, GitHub, have launched The Green Software Foundation," reports ZDNet: Announced at Microsoft's Build 2021 developer conference, the foundation is trying to promote the idea of green software engineering - a new field that looks to make code more efficient and reduce carbon emitted from the hardware it's running on... The foundation wants to set standards, best practices and patterns for building green software; nurture the creation of trusted open-source and open-data projects and support academic research; and grow an international community of green software ambassadors. The goal is to help the Information and Communication Technology sector to reduce its greenhouse gas emissions by 45% before 2030.

That includes mobile network operators, ISPs, data centers, and all the laptops being snapped up during the pandemic. "We envision a future where carbon-free software is standard - where software development, deployment, and use contribute to the global climate solution without every developer having to be an expert," Erica Brescia, COO of GitHub said in a statement. Microsoft president Brad Smith said "the world confronts an urgent carbon problem."

"It will take all of us working together to create innovative solutions to drastically reduce emissions. Microsoft is joining with organizations who are serious about an environmentally sustainable future to drive adoption of green software development to help our customers and partners around the world reduce their carbon footprint."

VentureBeat also points out that Microsoft "recently launched a $1 billion Climate Innovation Fund to accelerate the global development of carbon reduction, capture, and removal technologies."

But Bloomberg explores the rationale behind the new foundation: Data centers now account for about 1% of global electricity demand, and that's forecast to rise to 3% to 8% in the next decade, the companies said in a statement Tuesday, timed to Microsoft's Build developers conference... While it's tough to determine exactly how much carbon is emitted by individual software programs, groups like the Green Software Foundation examine metrics such as how much electricity is needed, whether microprocessors are being used efficiently, and the carbon emitted in networking. The foundation plans to look at curricula and develop certifications that would give engineers expertise in this space. As with areas like data science and cybersecurity, there will be an opportunity for engineers to specialize in green software development, but everyone who builds software will need at least some background in it, said Jeff Sandquist, a Microsoft vice president for developer relations.

"This will be the responsibility of everybody on the development team, much like when we look at security, or performance or reliability," he said. "Building the application in a sustainable way is going to matter."
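The kind of metric the foundation describes, electricity drawn by a workload times the carbon intensity of the grid it runs on, reduces to simple arithmetic. A minimal sketch (the wattage and grid-intensity figures below are illustrative assumptions, not foundation numbers):

```python
def workload_emissions_gco2(runtime_hours, avg_power_watts, grid_gco2_per_kwh):
    """Estimate operational CO2 for one software workload.
    All inputs are assumptions the caller must supply."""
    energy_kwh = runtime_hours * avg_power_watts / 1000.0
    return energy_kwh * grid_gco2_per_kwh

# Example: a 10-hour batch job on a 300 W server, on a 400 gCO2/kWh grid
print(workload_emissions_gco2(10, 300, 400))  # → 1200.0 grams of CO2
```

The same formula shows the two levers green software engineering can pull: use less energy (more efficient code), or use it where and when the grid is cleaner.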

The Almighty Buck

Intelligent NFT Created, Linked to a Machine-Learning Chatbot (decrypt.co) 22

Decrypt reports on the world's first "intelligent NFT" (or iNFT), being auctioned off in June as part of a collection of digital artworks at Sotheby's.

Her name is Alice: The brainchild of artist Ben Gentilli's Robert Alice studio and software developers Alethea AI, Alice is a non-fungible token (NFT), a blockchain-based token that can be used to prove ownership of a digital or physical asset. In this case, the asset in question is a machine-learning bot that uses a generative language model based on the OpenAI GPT-3 engine.

That means she's able to hold (somewhat stilted) conversations about life, the universe and everything... Since Alice "learns" from each audience interaction, drifting further from the original seed text, it becomes a decentralized manifesto. "It's fairly loose, because the audience can take it anywhere," Gentilli says. Alice has strong views on NFTs, as you might expect. "Non-fungible tokens are a way to liberate artists and give them the power of the blockchain," she tells me. But she's a little hazy on the details. Asked how, exactly, that would work, all she can come up with is, "I don't know. I am not an artist..."

So, is there an appetite for NFTs that talk back? Alethea CEO Arif Khan thinks so. "We're actually building a protocol that will allow you to take any NFT, put it into the smart contract infrastructure that we've built, and make it intelligent and interactive," he says. Your Beeple art piece or CryptoPunk could start talking back to you, he suggests. Or you could take your grandparent's diaries and use them as the seed text for a generative language bot. But do you want your CryptoPunk to talk to you? Chatbots already exist, and it's not clear why you'd need that bot to be attached to an NFT.

On the other hand, art can be a way to explore the implications of new technologies, Gentilli argues: "When you think about the whole trajectory of synthetic media, artists have been the people probably most known for experimenting with it at its rawest edge."

AI

AI Could Soon Write Code Based On Ordinary Language (wired.com) 57

An anonymous reader quotes a report from Wired: On Tuesday, Microsoft and OpenAI shared plans to bring GPT-3, one of the world's most advanced models for generating text, to programming based on natural language descriptions. This is the first commercial application of GPT-3 undertaken since Microsoft invested $1 billion in OpenAI last year and gained exclusive licensing rights to GPT-3. "If you can describe what you want to do in natural language, GPT-3 will generate a list of the most relevant formulas for you to choose from," said Microsoft CEO Satya Nadella in a keynote address at the company's Build developer conference. "The code writes itself."

Microsoft VP Charles Lamanna told WIRED the sophistication offered by GPT-3 can help people tackle complex challenges and empower people with little coding experience. GPT-3 will translate natural language into PowerFx, a fairly simple programming language similar to Excel commands that Microsoft introduced in March. Microsoft's new feature is based on a neural network architecture known as Transformer, used by big tech companies including Baidu, Google, Microsoft, Nvidia, and Salesforce to create large language models using text training data scraped from the web. These language models continually grow larger. The largest version of Google's BERT, a language model released in 2018, had 340 million parameters, a building block of neural networks. GPT-3, which was released one year ago, has 175 billion parameters. Such efforts have a long way to go, however. In one recent test, the best model succeeded only 14 percent of the time on introductory programming challenges compiled by a group of AI researchers. Still, researchers who conducted that study conclude that tests prove that "machine learning models are beginning to learn how to code."
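The 14 percent figure comes from functional testing: each generated program is executed against held-out test cases and counts as a success only if it behaves correctly. A minimal sketch of that evaluation loop (the "generated" solutions here are hard-coded stand-ins for model output):

```python
def passes(candidate_src, tests):
    """Run one generated program against its test cases."""
    env = {}
    try:
        exec(candidate_src, env)  # the "generated" code must define solve()
        return all(env["solve"](x) == want for x, want in tests)
    except Exception:
        return False              # crashes and syntax errors count as failures

# Stand-ins for model output on one toy challenge: "return the square of n"
challenge_tests = [(2, 4), (3, 9), (-1, 1)]
candidates = [
    "def solve(n): return n * n",   # correct
    "def solve(n): return n + n",   # plausible but wrong
    "def solve(n): return n **",    # doesn't even parse
]

pass_rate = sum(passes(c, challenge_tests) for c in candidates) / len(candidates)
print(f"{pass_rate:.0%}")  # → 33%
```

Scoring by behavior rather than by textual similarity is what makes the benchmark hard: plausible-looking code that fails a single hidden test earns nothing.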

AI

A Disturbing, Viral Twitter Thread Reveals How AI-Powered Insurance Can Go Wrong (vox.com) 49

An anonymous reader quotes a report from Vox: Lemonade, the fast-growing, machine learning-powered insurance app, put out a real lemon of a Twitter thread on Monday with a proud declaration that its AI analyzes videos of customers when determining if their claims are fraudulent. The company has been trying to explain itself and its business model -- and fend off serious accusations of bias, discrimination, and general creepiness -- ever since. [...] Over a series of seven tweets, Lemonade claimed that it gathers more than 1,600 "data points" about its users -- "100X more data than traditional insurance carriers," the company claimed. The thread didn't say what those data points are or how and when they're collected, simply that they produce "nuanced profiles" and "remarkably predictive insights" which help Lemonade determine, in apparently granular detail, its customers' "level of risk." Lemonade then provided an example of how its AI "carefully analyzes" videos that it asks customers making claims to send in "for signs of fraud," including "non-verbal cues." Traditional insurers are unable to use video this way, Lemonade said, crediting its AI for helping it improve its loss ratios: that is, taking in more in premiums than it had to pay out in claims. Lemonade used to pay out a lot more than it took in, which the company said was "friggin terrible." Now, the thread said, it takes in more than it pays out.

The Twitter thread made the rounds to a horrified and growing audience, drawing the requisite comparisons to the dystopian tech television series Black Mirror and prompting people to ask if their claims would be denied because of the color of their skin, or if Lemonade's claims bot, "AI Jim," decided that they looked like they were lying. What, many wondered, did Lemonade mean by "non-verbal cues?" Threats to cancel policies (and screenshot evidence from people who did cancel) mounted. By Wednesday, the company walked back its claims, deleting the thread and replacing it with a new Twitter thread and blog post. You know you've really messed up when your company's apology Twitter thread includes the word "phrenology." "The Twitter thread was poorly worded, and as you note, it alarmed people on Twitter and sparked a debate spreading falsehoods," a spokesperson for Lemonade told Recode. "Our users aren't treated differently based on their appearance, disability, or any other personal characteristic, and AI has not been and will not be used to auto-reject claims."

The company also maintains that it doesn't profit from denying claims and that it takes a flat fee from customer premiums and uses the rest to pay claims. Anything left over goes to charity (the company says it donated $1.13 million in 2020). But this model assumes that the customer is paying more in premiums than what they're asking for in claims. So, what's really going on here? According to Lemonade, the claim videos customers have to send are merely to let them explain their claims in their own words, and the "non-verbal cues" are facial recognition technology used to make sure one person isn't making claims under multiple identities. Any potential fraud, the company says, is flagged for a human to review and make the decision to accept or deny the claim. AI Jim doesn't deny claims. The blog post also didn't address -- nor did the company answer Recode's questions about -- how Lemonade's AI and its many data points are used in other parts of the insurance process, like determining premiums or if someone is too risky to insure at all.

Privacy

Clearview AI Hit With Sweeping Legal Complaints Over Controversial Face Scraping in Europe (theverge.com) 10

Privacy International (PI) and several other European privacy and digital rights organizations announced today that they've filed legal complaints against the controversial facial recognition company Clearview AI. From a report: The complaints filed in France, Austria, Greece, Italy, and the United Kingdom say that the company's method of documenting and collecting data -- including images of faces it automatically extracts from public websites -- violates European privacy laws. New York-based Clearview claims to have built "the largest known database of 3+ billion facial images."

PI, noyb, Hermes Center for Transparency and Digital Human Rights, and Homo Digitalis all claim that Clearview's data collection goes beyond what the average user would expect when using services like Instagram, LinkedIn, or YouTube. "Extracting our unique facial features or even sharing them with the police and other companies goes far beyond what we could ever expect as online users," said PI legal officer Ioannis Kouvakas in a joint statement.

AI

Automation Puts a Premium on Decision-Making Jobs (axios.com) 59

A new paper shows that as automation has reduced the number of rote jobs, it has led to an increase in the proportion and value of occupations that involve decision-making. From a report: Automation and AI will shape the labor market, putting a premium -- at least for now -- on workers who can make decisions on the fly, while eroding the value of routine jobs. David Deming, a political economist at the Harvard Kennedy School, analyzed labor data over the past half-century and found that the share of all U.S. jobs requiring decision-making rose from 6% in 1960 to 34% in 2018, with nearly half the increase occurring since 2007.

Partially as a result, a greater share of wages is going to management and management-related occupations, more than doubling since 1960 to 32% -- a trend that is more pronounced in high-growth industries. This shift has also reinforced generational disparity in the labor market. Getting better at making decisions requires experience, and experience requires time on the job. Largely as a result, career earnings growth in the U.S. more than doubled between 1960 and 2017, and the age of peak earnings increased from the late 30s to the mid-50s.

AI

OpenAI's $100 Million Startup Fund Will Make 'Big Early Bets' With Microsoft As Partner 10

OpenAI is launching a $100 million startup fund, which it calls the OpenAI Startup Fund, through which it and its partners will invest in early-stage AI companies tackling major problems (and productivity). Among those partners and investors in the fund is Microsoft, at whose Build conference OpenAI founder Sam Altman announced the news. TechCrunch reports: In a prerecorded video, Altman explained that "this is not a typical corporate venture fund. We plan to make big early bets on a relatively small number of companies, probably not more than 10." It's not clear exactly how the $100 million will be divided or disbursed, or on what timeline, or whether this is part of a longer program. But it seems to be a limited fund, not just the 2021 round.

Altman did say that they will be looking for companies that are taking on serious issues, like healthcare, climate change and education, where AI-powered applications or approaches could "benefit all of humanity," in keeping with OpenAI's mission statement. But it would also consider productivity improvements as well, presumably like the GPT-3-powered natural language coding Microsoft showed off yesterday. Companies selected for funding will receive early access to new OpenAI systems and Azure resources from Microsoft, which hopefully would allow them to spring fully formed and ready to scale from the program. OpenAI would not elaborate on the equity agreement, expectations for startups, other partners or any further details. It's entirely possible that the $100 million figure is the only thing they've actually settled on.
AI

Synopsys Claims Chip Design Breakthrough With AI Engineering (forbes.com) 31

MojoKid writes: Mountain View, CA silicon design tools heavyweight Synopsys is claiming a breakthrough in chip design automation that it says will usher in a new level of semiconductor innovation, taking the industry above and beyond the limits of Moore's Law (Gordon Moore's observation that the number of transistors in chips doubles roughly every two years), which is now considered by many to be plateauing. Synopsys' tool, called DSO.ai, is the world's first autonomous AI tool set for chip design. Synopsys claims DSO.ai can dramatically accelerate, enhance, and reduce the costs involved with something called place-and-route. Just as it sounds, place-and-route (sometimes called floor planning) refers to the placement of logic and IP blocks, and the routing of the traces and various interconnects in a chip design that join them all together. Synopsys' DSO.ai optimizes and streamlines this process using the iterative nature of artificial intelligence and machine learning, such that what used to take dozens of engineers weeks or potentially months will now take a junior engineer just days to complete. DSO.ai iterates on the floorplan and layout of a chip, learns from each iteration, and fine-tunes and optimizes the chip within its design parameters and targets along the way. The old semiconductor paradigms are rapidly becoming a thing of the past. Today, it's about the best transistors, architectures, and accelerators for the job, and the human-constrained physical design engineering effort no longer has to be a gating factor.
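DSO.ai itself uses reinforcement-learning-style search, but the iterate-evaluate-refine loop it runs over a floorplan can be illustrated with a much older technique: simulated annealing over cell placement, minimizing total wirelength. Everything below (the grid, netlist, and cooling schedule) is a toy assumption, not Synopsys' method.

```python
import math
import random

random.seed(0)

# Toy netlist: cells 0..5, nets listed as pairs of connected cells
NETS = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (1, 4)]

def wirelength(pos):
    """Total Manhattan wirelength of all nets for a placement."""
    return sum(abs(pos[a][0] - pos[b][0]) + abs(pos[a][1] - pos[b][1])
               for a, b in NETS)

# Random initial placement on a 4x4 grid, one cell per site
sites = random.sample([(x, y) for x in range(4) for y in range(4)], 6)
placement = dict(enumerate(sites))
initial_cost = current_cost = wirelength(placement)
best, best_cost = dict(placement), initial_cost

temp = 5.0
for step in range(2000):
    a, b = random.sample(range(6), 2)
    placement[a], placement[b] = placement[b], placement[a]  # propose a swap
    new_cost = wirelength(placement)
    if new_cost < current_cost or random.random() < math.exp((current_cost - new_cost) / temp):
        current_cost = new_cost            # accept (sometimes even a worse move)
        if new_cost < best_cost:
            best, best_cost = dict(placement), new_cost
    else:
        placement[a], placement[b] = placement[b], placement[a]  # reject, undo
    temp *= 0.995                          # cool down: fewer bad moves over time

print(f"wirelength: {initial_cost} -> {best_cost}")
```

Real place-and-route juggles timing, congestion, and power across millions of cells; the point here is only the shape of the loop: propose a change, evaluate, keep what helps.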
Microsoft

Microsoft Uses GPT-3 To Let You Code in Natural Language (techcrunch.com) 37

Microsoft is now using OpenAI's massive GPT-3 natural language model in its no-code/low-code Power Apps service to translate spoken text into code in its recently announced Power Fx language. From a report: Now don't get carried away. You're not going to develop the next TikTok while only using natural language. Instead, what Microsoft is doing here is taking some of the low-code aspects of a tool like Power Apps and using AI to essentially turn those into no-code experiences, too. For now, the focus here is on Power Apps formulas, which, despite the low-code nature of the service, is something you'll have to write sooner or later if you want to build an app of any sophistication.

"Using an advanced AI model like this can help our low-code tools become even more widely available to an even bigger audience by truly becoming what we call no code," said Charles Lamanna, corporate vice president for Microsoft's low-code application platform. In practice, this looks like the citizen programmer writing "find products where the name starts with 'kids'" -- and Power Apps then rendering that as "Filter('BC Orders', Left('Product Name',4)="Kids")". Because Microsoft is an investor in OpenAI, it's no surprise the company chose its model to power this experience.
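What GPT-3 does statistically can be mimicked, for exactly one query shape, with a rule-based toy. The table and column names below are taken from Microsoft's example; the real feature is a large language model, not pattern matching:

```python
import re

def toy_nl_to_powerfx(query, table="'BC Orders'", column="'Product Name'"):
    """Handle only queries of the form: ... starts with '<prefix>' ...
    A deliberately tiny stand-in for the GPT-3 translation step."""
    m = re.search(r"starts with '([^']+)'", query)
    if m is None:
        raise ValueError("query shape not recognized by this toy")
    prefix = m.group(1).capitalize()
    # Power Fx: keep rows whose first len(prefix) characters match the prefix
    return f'Filter({table}, Left({column},{len(prefix)})="{prefix}")'

print(toy_nl_to_powerfx("find products where the name starts with 'kids'"))
# → Filter('BC Orders', Left('Product Name',4)="Kids")
```

The contrast is the point: the rule above handles one phrasing, while a language model generalizes across the endless ways a citizen developer might word the same request.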

China

Huawei Founder Urges Shift To Software To Counter US Sanctions (reuters.com) 22

Founder of Chinese tech giant Huawei Technologies Ren Zhengfei has called on the company's staff to "dare to lead the world" in software as the company seeks growth beyond the hardware operations that U.S. sanctions have crippled. From a report: The internal memo seen by Reuters is the clearest evidence yet of the company's direction as it responds to the immense pressure sanctions have placed on the handset business that was at its core. Ren said in the memo the company was focusing on software because future development in the field is fundamentally "outside of U.S. control and we will have greater independence and autonomy." As it will be hard for Huawei to produce advanced hardware in the short term, it should focus on building software ecosystems, such as its HarmonyOS operating system, its cloud AI system Mindspore, and other IT products, the note said.
Businesses

Do You Own a Motorcycle Airbag if You Have to Pay Extra to Inflate It? (hackaday.com) 166

"Pardon me while I feed the meter on my critical safety device," quips a Hackaday article (shared by long-time Slashdot reader AmiMoJo): If you ride a motorcycle, you may have noticed that the cost of airbag vests has dropped. But in one case, something very different is going on. As reported by Motherboard, you can pick up a KLIM Ai-1 for $400, but the airbag built into it will not function until unlocked with an additional purchase, and a big one at that. So do you really own the vest for $400...?

The Klim airbag vest has two components that make it work. The vest itself, from Klim, costs $400 and arrives along with the airbag unit. But if you want it to actually detect an accident and inflate, you need to load up a smartphone app and activate a small black box made by a different company: In&Motion. Activation requires either another $400 payment or a subscription at $12 a month or $120 a year.

If you fail to renew, the vest is essentially worthless.
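A quick back-of-the-envelope on the pricing options (the dollar figures come from the report; the break-even arithmetic is just illustrative):

```python
# Activation pricing as reported: one-time unlock vs. subscription.
one_time = 400   # USD, permanent activation
monthly = 12     # USD per month
yearly = 120     # USD per year

# Years of the annual subscription before the one-time unlock is cheaper.
breakeven_years = one_time / yearly   # 400 / 120, a bit over 3 years

# A year of monthly payments costs more than the annual plan.
monthly_per_year = 12 * monthly       # 144 USD/year vs. 120 USD/year
```

In other words, anyone planning to ride for more than about three years comes out ahead buying the unlock outright -- assuming, of course, the activation servers stay up that long.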

Hackaday notes it raises the question of what it means to own a piece of technology.

"Do you own your cable modem or cell phone if you aren't allowed to open it up? Do you own a piece of software that wants to call home periodically and won't let you stop it?"
AI

RAI's Certification Process Aims To Prevent AIs From Turning Into HALs (engadget.com) 71

An anonymous reader quotes a report from Engadget: [T]he Responsible Artificial Intelligence Institute (RAI) -- a non-profit developing governance tools to help usher in a new generation of trustworthy, safe, Responsible AIs -- hopes to offer a more standardized means of certifying that our next HAL won't murder the entire crew. In short, they want to build "the world's first independent, accredited certification program of its kind." Think of the LEED green building certification system used in construction, but with AI instead. Work towards this certification program began nearly half a decade ago alongside the founding of RAI itself, at the hands of Dr. Manoj Saxena, University of Texas Professor on Ethical AI Design, RAI Chairman and a man widely considered to be the "father" of IBM Watson, though his initial inspiration came even further back.

Certifications are awarded in four levels -- basic, silver, gold, and platinum (sorry, no bronze) -- based on the AI's scores along the five OECD principles of Responsible AI: interpretability/explainability, bias/fairness, accountability, robustness against unwanted hacking or manipulation, and data quality/privacy. The certification is administered via questionnaire and a scan of the AI system. Developers must score 60 points to reach the base certification, 70 points for silver and so on, up to 90 points-plus for platinum status. [Mark Rolston, founder and CCO of argodesign] notes that design analysis will play an outsized role in the certification process. "Any company that is trying to figure out whether their AI is going to be trustworthy needs to first understand how they're constructing that AI within their overall business," he said. "And that requires a level of design analysis, both on the technical front and in terms of how they're interfacing with their users, which is the domain of design."

RAI expects to find (and in some cases has already found) a number of willing entities from government, academia, enterprise corporations, or technology vendors for its services, though the two are remaining mum on specifics while the program is still in beta (until November 15th, at least). Saxena hopes that, like the LEED certification, RAI will eventually evolve into a universalized certification system for AI. He argues it will help accelerate the development of future systems by eliminating much of the uncertainty and liability exposure today's developers -- and their harried compliance officers -- face, while building public trust in the brand. "We're using standards from IEEE, we are looking at things that ISO is coming out with, we are looking at leading indicators from the European Union like GDPR, and now this recently announced algorithmic law," Saxena said. "We see ourselves as the 'do tank' that can operationalize those concepts and those think tanks' work."

Google

Google Unit DeepMind Tried and Failed to Win AI Autonomy From Parent (wsj.com) 32

Senior managers at Google artificial-intelligence unit DeepMind have been negotiating for years with the parent company for more autonomy, seeking an independent legal structure for the sensitive research they do. From a report: DeepMind told staff late last month that Google called off those talks, WSJ reported Friday, citing people familiar with the matter. The end of the long-running negotiations, which hasn't previously been reported, is the latest example of how Google and other tech giants are trying to strengthen their control over the study and advancement of artificial intelligence. Earlier this month, Google unveiled plans to double the size of its team studying the ethics of artificial intelligence and to consolidate that research.

[...] DeepMind's founders had sought, among other ideas, a legal structure used by nonprofit groups, reasoning that the powerful artificial intelligence they were researching shouldn't be controlled by a single corporate entity, according to people familiar with those plans. On a video call last month with DeepMind staff, co-founder Demis Hassabis said the unit's effort to negotiate a more autonomous corporate structure was over, according to people familiar with the matter. He also said DeepMind's AI research and its application would be reviewed by an ethics board staffed mostly by senior Google executives.

Supercomputing

Google Plans To Build a Commercial Quantum Computer By 2029 (engadget.com) 56

Google developers are confident they can build a commercial-grade quantum computer by 2029. Engadget reports: Google CEO Sundar Pichai announced the plan during today's I/O stream, and in a blog post, quantum AI lead engineer Erik Lucero further outlined the company's goal to "build a useful, error-corrected quantum computer" within the decade. Executives also revealed Google's new campus in Santa Barbara, California, which is dedicated to quantum AI. The campus has Google's first quantum data center, hardware research laboratories, and the company's very own quantum processor chip fabrication facilities.

"As we look 10 years into the future, many of the greatest global challenges, from climate change to handling the next pandemic, demand a new kind of computing," Lucero said. "To build better batteries (to lighten the load on the power grid), or to create fertilizer to feed the world without creating 2 percent of global carbon emissions (as nitrogen fixation does today), or to create more targeted medicines (to stop the next pandemic before it starts), we need to understand and design molecules better. That means simulating nature accurately. But you can't simulate molecules very well using classical computers."
