AI

FTC Issues Stern Warning: Biased AI May Break the Law (protocol.com) 82

The Federal Trade Commission has signaled that it's taking a hard look at bias in AI, warning businesses that selling or using such systems could constitute a violation of federal law. From a report: "The FTC Act prohibits unfair or deceptive practices," the post reads. "That would include the sale or use of -- for example -- racially biased algorithms." The post also notes that biased AI can violate the Fair Credit Reporting Act and the Equal Credit Opportunity Act. "The FCRA comes into play in certain circumstances where an algorithm is used to deny people employment, housing, credit, insurance, or other benefits," it says. "The ECOA makes it illegal for a company to use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance." The post mirrors comments made by acting FTC chair Rebecca Slaughter, who recently told Protocol of her intention to ensure that FTC enforcement efforts "continue and sharpen in our long, arduous and very large national task of being anti-racist."
AI

Google Translation AI Botches Legal Terms 'Enjoin,' 'Garnish' (reuters.com) 84

Translation tools from Google and other companies could be contributing to significant misunderstanding of legal terms with conflicting meanings such as "enjoin," according to research due to be presented at an academic workshop on Monday. From a report: Google's translation software turns an English sentence about a court enjoining violence, or banning it, into one in the Indian language of Kannada that implies the court ordered violence, according to the new study. "Enjoin" can refer to either promoting or restraining an action. Mistranslations also arise with other contronyms, or words with contradictory meanings depending on context, including "all over," "eventual" and "garnish," the paper said.

Google said machine translation "is still just a complement to specialized professional translation" and that it is "continually researching improvements, from better handling ambiguous language, to mitigating bias, to making large quality gains for under-resourced languages." The study's findings add to scrutiny of automated translations generated by artificial intelligence software. Researchers have previously found that programs which learn translations by studying non-diverse text perpetuate historical gender biases, such as associating "doctor" with "he." The new paper raises concerns about a popular method companies use to broaden the vocabulary of their translation software: they translate foreign text into English and then back into the foreign language, aiming to teach the software to associate similar ways of saying the same phrase.
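To make that method concrete, here is a minimal sketch of the round-trip translation described above and the contronym failure mode it can hide. Note that translate(text, src, dst) is a hypothetical stand-in for any machine translation API, not a real library call:

def round_trip(text, via="kn"):
    # Translate English text into another language and back, mirroring
    # the back-translation method described above. `translate` is a
    # hypothetical helper, not a real API.
    foreign = translate(text, src="en", dst=via)
    return translate(foreign, src=via, dst="en")

# With a contronym like "enjoin," a round trip through Kannada can
# silently flip the meaning from "ban" to "order":
print(round_trip("The court enjoined the violence."))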

AI

US Banks Deploy AI To Monitor Customers, Workers Amid Tech Backlash (reuters.com) 35

Several U.S. banks have started deploying camera software that can analyze customer preferences, monitor workers and spot people sleeping near ATMs, even as they remain wary about possible backlash over increased surveillance, Reuters reported Monday, citing more than a dozen banking and technology sources. From the report: Previously unreported trials at City National Bank of Florida and JPMorgan Chase & Co as well as earlier rollouts at banks such as Wells Fargo & Co offer a rare view into the potential U.S. financial institutions see in facial recognition and related artificial intelligence systems. Widespread deployment of such visual AI tools in the heavily regulated banking sector would be a significant step toward their becoming mainstream in corporate America. Bobby Dominguez, chief information security officer at City National, said smartphones that unlock via a face scan have paved the way. "We're already leveraging facial recognition on mobile," he said. "Why not leverage it in the real world?"

City National will begin facial recognition trials early next year to identify customers at teller machines and employees at branches, aiming to replace clunky and less secure authentication measures at its 31 sites, Dominguez said. Eventually, the software could spot people on government watch lists, he said. JPMorgan said it is "conducting a small test of video analytic technology with a handful of branches in Ohio." Wells Fargo said it works to prevent fraud but declined to discuss how.

AI

Nvidia's CEO Predicts a Metaverse Will Transform Our World (time.com) 120

"Jensen Huang, the CEO of Nvidia, the nation's most valuable semiconductor company, with a stock price of $645 a share and a market cap of $400 billion, is out to create the metaverse," writes Time magazine.

Huang defines it as "a virtual world that is a digital twin of ours." He credits author Neal Stephenson's Snow Crash, filled with collectives of shared 3-D spaces and virtually enhanced physical spaces that are extensions of the Internet, for conjuring the metaverse. This is already playing out in massively popular online games like Fortnite and Minecraft, where users create richly imagined virtual worlds. Now the concept is being put to work by Nvidia and others.

Partnering with Nvidia, BMW is using a virtual digital twin of a factory in Regensburg, Germany, to virtually plan new workflows before deploying the changes in real time in their physical factory. The metaverse, says Huang, "is where we will create the future" and transform how the world's biggest industries operate...

Not to make any value judgments about the importance of video games, but do you find it ironic that a company that has its roots in entertainment is now providing vitally important computing power for drug discovery, basic research and reinventing manufacturing?

No, not at all. It's actually the opposite. We always started as a computing company. It just turned out that our first killer app was video games...

How important is the advent and the adaptation of digital twins for manufacturing, business and society at large?

In the future, the digital world or the virtual world will be thousands of times bigger than the physical world. There will be a new New York City. There'll be a new Shanghai. Every single factory and every single building will have a digital twin that will simulate and track the physical version of it. Always. By doing so, engineers and software programmers could simulate new software that will ultimately run in the physical version of the car, the physical version of the robot, the physical version of the airport, the physical version of the building. All of the software that's going to be running in these physical things will be simulated in the digital twin first, and then it will be downloaded into the physical version. And as a result, the product keeps getting better at an exponential rate.

The second thing is, you're going to be able to go in and out of the two worlds through wormholes. We'll go into the virtual world using virtual reality, and the objects in the virtual world, in the digital world, will come into the physical world, using augmented reality. So what's going to happen is pieces of the digital world will be temporarily, or even semipermanently, augmenting our physical world. It's ultimately about the fusion of the virtual world and the physical world.

See also this possibly related story, "Nvidia's newest AI model can transform single images into realistic 3D models."
Transportation

'No One Was Driving the Car': 2 Dead After Fiery Tesla Crash (click2houston.com) 340

Texas TV station KPRC 2 reports that two men are dead after a Tesla "crashed into a tree and no one was driving the vehicle, officials say."

Long-time Slashdot readers AmiMoJo and McGruber both submitted the story: One man was found in the front passenger seat of the car and the other in a rear passenger seat. Harris County Precinct 4 Constable Mark Herman said authorities believe no one else was in the car and that it burst into flames immediately. He said he believes it wasn't being driven by a human.

Harris County Constable Precinct 4 deputies said the vehicle was traveling at a high speed when it failed to negotiate a cul-de-sac turn, ran off the road and hit the tree.

KPRC 2 reporter Deven Clarke spoke to one man's brother-in-law who said he was taking the car out for a spin with his best friend, so there were just two in the vehicle. The owner, he said, backed out of the driveway, and then may have hopped in the back seat only to crash a few hundred yards down the road...

Authorities said they used 32,000 gallons of water to extinguish the flames because the vehicle's batteries kept reigniting. At one point, Herman said, deputies had to call Tesla to ask them how to put out the fire in the battery.

Space

How OneWeb, SpaceX Satellites Dodged a Potential Collision in Orbit (theverge.com) 40

"Two satellites from the fast-growing constellations of OneWeb and SpaceX's Starlink dodged a dangerously close approach with one another in orbit," reported The Verge, citing representatives from both OneWeb and the U.S. Space Force.

UPDATE (April 22): SpaceX strongly disputes OneWeb's characterization of the event.

Below is the Verge's original report: On March 30th, five days after OneWeb launched its latest batch of 36 satellites from Russia, the company received several "red alerts" from the US Space Force's 18th Space Control Squadron warning of a possible collision with a Starlink satellite. Because OneWeb's constellation operates in higher orbits around Earth, the company's satellites must pass through SpaceX's mesh of Starlink satellites, which orbit at an altitude of roughly 550 km.

One Space Force alert indicated a collision probability of 1.3 percent, with the two satellites coming as close as 190 feet — dangerously close for satellites in orbit. A collision could cause a cascading disaster, generating hundreds of pieces of debris and sending them on crash courses with other satellites nearby...

Space Force's urgent alerts sent OneWeb engineers scrambling to email SpaceX's Starlink team to coordinate maneuvers that would put the two satellites at safer distances from one another. While coordinating with OneWeb, SpaceX disabled its automated AI-powered collision avoidance system to allow OneWeb to steer its satellite out of the way, according to OneWeb's government affairs chief Chris McLaughlin... SpaceX's automated system for avoiding satellite collisions has sparked controversy, raising concerns from other satellite operators who say they have no way of knowing which way the system will move a Starlink satellite in the event of a close approach.

AI

AI-Driven Audio Cloning Startup Gives Voice To Einstein Chatbot (techcrunch.com) 23

Aflorithmic, an AI-driven audio cloning startup, has created a digital version of Albert Einstein using AI voice cloning technology drawing on audio records of the famous scientist's actual voice. TechCrunch reports: Aflorithmic says the "digital Einstein" is intended as a showcase for what will soon be possible with conversational social commerce. Which is a fancy way of saying deepfakes that make like historical figures will probably be trying to sell you pizza soon enough, as industry watchers have presciently warned. The startup also says it sees educational potential in bringing famous, long-deceased figures to interactive "life." Or, well, an artificial approximation of it -- the "life" being purely virtual and Digital Einstein's voice not being a pure tech-powered clone either; Aflorithmic says it also worked with an actor to do voice modelling for the chatbot (because how else was it going to get Digital Einstein to be able to say words the real deal would never even have dreamt of saying -- like, er, "blockchain"?). So there's a bit more than AI artifice going on here too.

In a blog post discussing how it recreated Einstein's voice, the startup writes about progress it made on one challenging element of the chatbot: it was able to shrink the response time between receiving input text from the computational knowledge engine and its API rendering a voiced response from an initial 12 seconds to less than three (which it dubs "near-real-time"). But it's still enough of a lag to ensure the bot can't escape being a bit tedious.
The report notes that the video engine powering the 3D character rendering components of this "digital human" version of Einstein is the work of another synthesized media company, UneeQ, which is hosting the interactive chatbot version on its website.
AI

Google Researchers Boost Speech Recognition Accuracy With More Datasets 16

What if the key to improving speech recognition accuracy is simply mixing all available speech datasets together to train one large AI model? That's the hypothesis behind a recent study published by a team of researchers affiliated with Google Research and Google Brain. They claim an AI model named SpeechStew that was trained on a range of speech corpora achieves state-of-the-art or near-state-of-the-art results on a variety of speech recognition benchmarks. VentureBeat reports: In pursuit of a solution, the Google researchers combined all available labeled and unlabeled speech recognition data curated by the community over the years. They drew on AMI, a dataset containing about 100 hours of meeting recordings, as well as corpora that include Switchboard (approximately 2,000 hours of telephone calls), Broadcast News (50 hours of television news), Librispeech (960 hours of audiobooks), and Mozilla's crowdsourced Common Voice. Their combined dataset had over 5,000 hours of speech -- none of which was adjusted from its original form. With the assembled dataset, the researchers used Google Cloud TPUs to train SpeechStew, yielding a model with more than 100 million parameters. In machine learning, parameters are the values a model learns from its training data. The researchers also trained a 1-billion-parameter model, but it suffered from degraded performance.
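The recipe is deliberately simple. A sketch of the pooling step, with load_corpus as a hypothetical helper returning (audio, transcript) pairs; this is an illustration, not Google's code:

import random

# Illustrative sketch of the mixing step, not the SpeechStew codebase.
corpora = [
    load_corpus("ami"),           # ~100 hours of meetings
    load_corpus("switchboard"),   # ~2,000 hours of telephone calls
    load_corpus("librispeech"),   # ~960 hours of audiobooks
    load_corpus("common_voice"),  # crowdsourced speech
]

# Pool every example unmodified, then shuffle so each training batch
# draws from all sources at once.
mixed = [example for corpus in corpora for example in corpus]
random.shuffle(mixed)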

Once the team had a general-purpose SpeechStew model, they tested it on a number of benchmarks and found that it not only outperformed previously developed models but demonstrated an ability to adapt to challenging new tasks. Leveraging Chime-6, a 40-hour dataset of distant conversations in homes recorded by microphones, the researchers fine-tuned SpeechStew to achieve accuracy in line with a much more sophisticated model. Transfer learning entails transferring knowledge from one domain to a different domain with less data, and it has shown promise in many subfields of AI. By taking a model like SpeechStew that's designed to understand generic speech and refining it at the margins, it's possible for AI to, for example, understand speech in different accents and environments.
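In code terms, that fine-tuning step amounts to resuming training from the general-purpose checkpoint on the small in-domain dataset. A rough sketch, with SpeechStewModel and chime6_loader as hypothetical stand-ins for the pretrained checkpoint and a Chime-6 data loader; this is not Google's code:

import torch

# Transfer-learning sketch under stated assumptions: the model class,
# checkpoint name, and data loader below are hypothetical.
model = SpeechStewModel.from_pretrained("speechstew-100m")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # small LR preserves general knowledge

for audio, transcript in chime6_loader:
    loss = model(audio, labels=transcript).loss  # speech recognition loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()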
Robotics

Farming Startup Unveils Self-Driving Robot That Uses AI To Zap Weeds (geekwire.com) 98

Carbon Robotics, a Seattle company led by Isilon Systems co-founder Paul Mikesell, is unveiling its self-driving robot that uses artificial intelligence to identify weeds growing in fields of vegetables, then zaps them with precision thermal bursts from lasers. GeekWire reports: [W]hat farmers need is less a revolution in farming methods than a revolutionary tool that fits into their current farming patterns, Mikesell said. Carbon worked closely with farmers in eastern Oregon and southern Idaho, he said. As a result, Carbon's robot system -- the Autonomous Weeder -- was built about the size of a medium tractor so it would fit in the furrows between rows of common crops like onions and sweet potatoes.

It can cover up to 16 acres of cropland a day, zapping as many as 100,000 weeds an hour, Mikesell said. And since it's self-driving, all a farmer has to do is take it to the field in the morning and turn it on. "We're really intent on not making farmers have to change how they're doing things," Mikesell said. "That's been a key to our success. We fit right into their operations."

Carbon has sold out all the robots it built for the 2021 planting season, and is looking for an industrial partner who could help it build more units for 2022, Mikesell said. The company is looking to get into the hundreds of units built and shipped for next year, he said. "There's a demand for a lot more than that, tens or hundreds of thousands of them."

Robotics

Korean Workers Need To Make Space For Robots, Minister Says (bloomberg.com) 26

An anonymous reader quotes a report from Bloomberg: South Koreans must learn how to work alongside machines if they want to thrive in a post-pandemic world where many jobs will be handled by artificial intelligence and robots, according to the country's labor minister. "Automation and AI will change South Korea faster than other countries," Minister of Employment and Labor Lee Jae-kap said in an interview Tuesday. "Not all jobs may be replaced by machines, but it's important to learn ways to work well with machines through training."

While people will have to increase their adaptability to work in a fast-changing high-tech environment, policy makers will also need to play their part, Lee said. The government needs to provide support to enable workers to move from one sector of the economy to another in search of employment and find ways to increase the activity of women in the economy, he added. The minister's remarks underline the determination of President Moon Jae-in's government to press ahead with a growth strategy built around tech even if it risks alienating the country's unions -- an important base of support for the ruling camp -- in the short term. "New jobs will be created as technology advances," Lee said. "What's important in policy is how to support a worker move from a fading sector to an emerging one."
The government is looking to help with this transition by expanding its employment insurance program to 21 million people, or more than 40% of the population, by 2025. "The program is part of a government initiative to provide financial support in the form of insurance for every worker in the country, whether they are artists, freelancers or deliverymen on digital platforms," adds Bloomberg.

"Separately, the government is providing stipends for young people to encourage them to keep searching for work, as their struggle to stay employed amid slowing economic growth has been made tougher by the pandemic."
Mars

What Happens When You Have a Heart Attack on the Way To Mars? (wired.co.uk) 70

If your heart stops en route to Mars, rest assured that researchers have considered how to carry out CPR in space. (One option is to plant your feet on the ceiling and extend your arms downwards to compress the patient's chest.) From a report: Astronauts, because of their age range and high physical fitness, are unlikely to suffer a stroke or have their appendix suddenly explode. That's good because, if it does happen, they're in the realm of what Jonathan Scott -- head of the medical projects and technology team at the European Space Agency -- describes as 'treatment futility.' In other words: there's nothing anyone can do about it. On the ISS, when medical incidents arise, astronauts can draw on the combined expertise of a host of medical experts at Nasa. "The patient is on the space station, the doctor is on the ground, and if there's a problem the patient consults the doctor," says Scott. By the time astronauts reach Mars, there'll be a 40-minute time lag in communications, if it's possible to make contact at all. "We have to begin preparing for not only being able to diagnose things in spaceflight but also to treat them as well," Scott says.
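That 40-minute figure is a round trip (question out, answer back); a quick back-of-the-envelope check in Python:

# Earth-Mars distance ranges from roughly 55 million km at closest
# approach to about 400 million km at the far side of the Sun.
C_KM_PER_S = 299_792  # speed of light

for label, km in [("closest", 55e6), ("farthest", 400e6)]:
    one_way = km / C_KM_PER_S / 60  # minutes
    print(f"{label}: one-way {one_way:.0f} min, round trip {2 * one_way:.0f} min")

# closest: one-way 3 min, round trip 6 min
# farthest: one-way 22 min, round trip 44 min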

Artificial intelligence is likely to be a part of the solution. If you're imagining the holographic doctor from Star Trek, downgrade your expectations, at least for the next few decades. Kris Lehnhardt, the element scientist for exploration medical capability at Nasa, says: "We are many, many, many years away from: please state the nature of the medical emergency." Emmanuel Urquieta is deputy chief scientist at the Translational Institute for Space Health (TRISH), a Nasa-funded program which conducts research into healthcare for deep space missions. While full AI may be a way off, Urquieta believes some form of artificial intelligence will still play a crucial role. "It's going to be essential for a mission to Mars," he says. While the crew for a mission to Mars will likely include a medical doctor, he explains: "No single physician can know everything." And, of course: "What happens if that astronaut gets sick?" Research projects funded by TRISH include Butterfly iQ, a handheld ultrasound device for use by non-medical personnel to make diagnoses that would otherwise require bulky equipment and a trained operator. VisualDx is an AI diagnostics tool originally developed to analyse images and identify skin conditions. The technology is now being adapted to help astronauts diagnose a wide range of conditions most commonly encountered in space, without an internet connection.

AI

Detroit Man Sues Police For Wrongfully Arresting Him Based On Facial Recognition 92

A man who was falsely accused of shoplifting has sued the Detroit Police Department for arresting him based on an incorrect facial recognition match. The American Civil Liberties Union filed suit on behalf of Robert Williams, whom it calls the first US person wrongfully arrested based on facial recognition. The Verge reports: The Detroit Police Department arrested Williams in 2019 after examining security footage from a shoplifting incident. A detective used facial recognition technology on a grainy image from the video, and the system flagged Williams as a potential match based on a driver's license photo. But as the lawsuit notes, facial recognition is frequently inaccurate, particularly with Black subjects and a low-quality picture. The department then produced a photo lineup that included Williams' picture, showed it to a security guard who hadn't actually witnessed the shoplifting incident, and obtained a warrant when that guard picked him from the lineup.

Williams -- who had been driving home from work during the incident -- spent 30 hours in a detention center. The ACLU later filed a formal complaint on his behalf, and the prosecutor's office apologized, saying he could have the case expunged from his records. The ACLU claims Detroit police used facial recognition under circumstances that they should have known would produce unreliable results, then dishonestly failed to mention the system's shortcomings -- including a "woefully substandard" image and the known racial bias of recognition systems.
Open Source

Inspur, China's Largest Cloud Hardware Vendor, Joins Open-Source Patent Consortium (zdnet.com) 7

An anonymous reader quotes a report from ZDNet: The Open Invention Network (OIN) defends the intellectual property (IP) rights of Linux and open-source software developers from patent trolls and the like. This is a global fight, and now the OIN has a new, powerful allied member in China: Inspur. Inspur is China's leading provider of data center infrastructure, cloud computing, and artificial intelligence (AI) servers. While not a household name like Lenovo, Inspur ranks among the world's top three server manufacturers.

Inspur is only the latest of many companies to join the OIN. Besides primarily hardware-oriented companies such as Inspur, members include Baidu, China's largest search engine company, and global banks such as Barclays and TD Bank Group. In 2021, even companies far removed from traditional Linux vendors like Canonical, Red Hat, and SUSE recognize the importance of Linux and open-source software. Donny Zhang, VP of Inspur Information, said, "Linux and open source are critical elements in technologies which we are developing and provisioning. By joining the Open Invention Network, we are demonstrating our continued commitment to innovation, and supporting it with patent non-aggression in core Linux and adjacent open-source software."
"Linux is rewriting what is possible in infrastructure computing," says OIN CEO Keith Bergelt. "OSS-based cloud computing and on-premise data centers are driving down the cost-per-compute while significantly increasing businesses' ability to provision AI and machine-learning (ML) capabilities. We appreciate Inspur's participation in joining OIN and demonstrating its commitment to innovation and patent non-aggression in open source."
EU

EU Poised To Set AI Rules That Would Ban Surveillance and Social Behavior Ranking (bloomberg.com) 73

The European Union is poised to ban artificial intelligence systems used for mass surveillance or for ranking social behavior, while companies developing AI could face fines as high as 4% of global revenue if they fail to comply with new rules governing the software applications. From a report: The rules are part of legislation set to be proposed by the European Commission, the bloc's executive body, according to a draft of the proposal obtained by Bloomberg. The details could change before the commission unveils the measure, which is expected to be as soon as next week. The EU proposal is expected to include the following rules:

* AI systems used to manipulate human behavior, exploit information about individuals or groups of individuals, used to carry out social scoring or for indiscriminate surveillance would all be banned in the EU. Some public security exceptions would apply.
* Remote biometric identification systems used in public places, like facial recognition, would need special authorization from authorities.
* AI applications considered to be 'high-risk' would have to undergo inspections before deployment to ensure systems are trained on unbiased data sets, in a traceable way and with human oversight.
* High-risk AI would pertain to systems that could endanger people's safety, lives or fundamental rights, as well as the EU's democratic processes -- such as self-driving cars and remote surgery, among others.
* Some companies will be allowed to undertake assessments themselves, whereas others will be subject to checks by third parties. Compliance certificates issued by assessment bodies will be valid for up to five years.
* Rules would apply equally to companies based in the EU or abroad.

Intel

Intel's Dystopian Anti-Harassment AI Lets Users Opt In for 'Some' Racism (vice.com) 131

Intel is launching an artificial intelligence application that will recognize and redact hate speech in real-time. It's called Bleep, and Intel hopes it'll help with one of gaming's oldest and most intractable problems -- people can be real pieces of shit online. From a report: A video of the app shows that it will allow users to customize what kind and how much hate speech they want to see, including "Racism" and "White Nationalism" sliders that can be set to "none," "some," "most," or "all," and a separate on and off toggle for the "N-word." "While we recognize that solutions like Bleep don't erase the problem, we believe it's a step in the right direction -- giving gamers a tool to control their experience," Roger Chandler, Vice President and General Manager of Intel Client Product Solutions, said during a virtual presentation at 2021's Game Developers Conference.

According to Intel Marketing Engineer Craig Raymond, Bleep is "an end-user application that uses AI to detect and redact audio based on your user preferences." In footage of the application, Bleep presented users with a list of sliders so gamers can control the amount of hate and abuse they encounter. The list included ableism and body shaming, LGBTQ+ hate, aggression, misogyny, name-calling, racism and xenophobia, sexually explicit language, swearing, and white nationalism. As Chandler explained, Intel can't "solve" racism or the long-running and well-documented problems in gaming culture (and culture more broadly). At the same time, Bleep is techno-AI solutionism that feels pretty dystopian, pitching racism, xenophobia, and general toxicity as settings that can be tuned up and down as though they were graphics, sound, or control sliders on a video game. It is also a way of admitting defeat: if we can't stop players from being incredibly racist in chat, we can simply filter out what they say and pretend they don't exist.
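Mechanically, the sliders described above amount to per-category thresholds on a classifier's output. A hypothetical sketch of that mapping, with names and scale invented for illustration; this is not Intel's Bleep code:

# Illustrative per-category filtering preferences.
THRESHOLDS = {"none": 0.0, "some": 0.33, "most": 0.66, "all": 1.0}

preferences = {
    "racism_xenophobia": "none",  # redact everything detected
    "swearing": "some",           # let mild instances through
    "n_word": False,              # separate toggle: False = never let it through
}

def should_redact(category, confidence):
    # Redact when the detector's confidence exceeds the share of this
    # category the user has chosen to hear.
    if category == "n_word":
        return not preferences["n_word"]
    allowed = THRESHOLDS[preferences.get(category, "none")]
    return confidence > allowed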

AI

Government Audit of AI With Ties To White Supremacy Finds No AI (venturebeat.com) 148

Khari Johnson writes via VentureBeat: In April 2020, news broke that Banjo CEO Damien Patton, once the subject of profiles by business journalists, was previously convicted of crimes committed with a white supremacist group. According to OneZero's analysis of grand jury testimony and hate crime prosecution documents, Patton pled guilty to involvement in a 1990 shooting attack on a synagogue in Tennessee. Amid growing public awareness about algorithmic bias, the state of Utah halted a $20.7 million contract with Banjo, and the Utah attorney general's office opened an investigation into matters of privacy, algorithmic bias, and discrimination. But in a surprise twist, an audit and report released last week found no bias in the algorithm because there was no algorithm to assess in the first place.

"Banjo expressly represented to the Commission that Banjo does not use techniques that meet the industry definition of artificial Intelligence. Banjo indicated they had an agreement to gather data from Twitter, but there was no evidence of any Twitter data incorporated into Live Time," reads a letter Utah State Auditor John Dougall released last week. The incident, which VentureBeat previously referred to as part of a "fight for the soul of machine learning," demonstrates why government officials must evaluate claims made by companies vying for contracts and how failure to do so can cost taxpayers millions of dollars. As the incident underlines, companies selling surveillance software can make false claims about their technologies' capabilities or turn out to be charlatans or white supremacists -- constituting a public nuisance or worse. The audit result also suggests a lack of scrutiny can undermine public trust in AI and the governments that deploy them.

Google

Google AI Research Manager Quits After Two Ousted From Group (bloomberg.com) 82

Google research manager Samy Bengio, who oversaw the company's AI ethics group until a controversy led to the ouster of two female leaders, resigned on Tuesday to pursue other opportunities. Bloomberg reports: Bengio, who managed hundreds of researchers in the Google Brain team, announced his departure in an email to staff that was obtained by Bloomberg. His last day will be April 28. An expert in a type of AI known as machine learning, Bengio joined Google in 2007. Ousted Ethical AI co-leads Timnit Gebru and Margaret Mitchell had reported to Bengio and considered him an ally. In February, Google reorganized the research unit, placing the remaining Ethical AI group members under Marian Croak, cutting Bengio's responsibilities.

"While I am looking forward to my next challenge, there's no doubt that leaving this wonderful team is really difficult," Bengio wrote in the email. "I learned so much with all of you, in terms of machine learning research of course, but also on how difficult yet important it is to organize a large team of researchers so as to promote long term ambitious research, exploration, rigor, diversity and inclusion," Bengio wrote in his email. He did not refer to Gebru, Mitchell or the disagreements that led to their departures. [...]

Intel

Intel Launches First 10nm 3rd Gen Xeon Scalable Processors For Data Centers (hothardware.com) 42

MojoKid writes: Intel just officially launched its first server products built on its advanced 10nm manufacturing process node, the 3rd Gen Xeon Scalable family of processors. 3rd Gen Xeon Scalable processors are based on the 10nm Ice Lake-SP microarchitecture, which incorporates a number of new features and enhancements. Core counts have been significantly increased with this generation, and now offer up to 40 cores / 80 threads per socket versus 28 cores / 56 threads in Intel's previous-gen offerings. The 3rd Gen Intel Xeon Scalable processor platform also supports up to 8 channels of DDR4-3200 memory, up to 6 terabytes of total memory, and up to 64 lanes of PCIe Gen4 connectivity per socket, for more bandwidth, higher capacity, and copious IO.

New AI, security and cryptographic capabilities arrive with the platform as well. Across Cloud, HPC, 5G, IoT, and AI workloads, new 3rd Gen Xeon Scalable processors are claimed to offer significant uplifts across the board versus their previous-gen counterparts. And versus rival AMD's EPYC platform, Intel is also claiming many victories, specifically when AVX-512, new crypto instructions, or DL Boost are added to the equation. Core counts in the line-up range from 8 to 40 cores per processor, and TDPs vary with the maximum base and boost frequencies and core count / configuration (up to a 270W TDP). Intel is currently shipping 3rd Gen Xeon Scalable CPUs to key customers, with over 200,000 chips shipped in Q1 this year and a steady ramp-up to follow.
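For a sense of scale, the quoted memory configuration implies roughly 205 GB/s of peak theoretical bandwidth per socket:

# 8 channels of DDR4-3200: 3,200 mega-transfers/s at 8 bytes each.
channels = 8
transfers_per_s = 3200e6
bytes_per_transfer = 8

peak = channels * transfers_per_s * bytes_per_transfer / 1e9
print(f"{peak:.1f} GB/s per socket")  # 204.8 GB/s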

IBM

Why IBM is Pushing 'Fully Homomorphic Encryption' (venturebeat.com) 122

VentureBeat reports on a "next-generation security" technique that allows data to remain encrypted while it's being processed.

"A security process known as fully homomorphic encryption is now on the verge of making its way out of the labs and into the hands of early adopters after a long gestation period." Companies such as Microsoft and Intel have been big proponents of homomorphic encryption. Last December, IBM made a splash when it released its first homomorphic encryption services. That package included educational material, support, and prototyping environments for companies that want to experiment. In a recent media presentation on the future of cryptography, IBM director of strategy and emerging technology Eric Maass explained why the company is so bullish on "fully homomorphic encryption" (FHE)...

"IBM has been working on FHE for more than a decade, and we're finally reaching an apex where we believe this is ready for clients to begin adopting in a more widespread manner," Maass said. "And that becomes the next challenge: widespread adoption. There are currently very few organizations here that have the skills and expertise to use FHE." To accelerate that development, IBM Research has released open source toolkits, while IBM Security launched its first commercial FHE service in December...

Maass said in the near term, IBM envisions FHE being attractive to highly regulated industries, such as financial services and health care. "They have both the need to unlock the value of that data, but also face extreme pressures to secure and preserve the privacy of the data that they're computing upon," he said.

The Wikipedia entry for homomorphic encryption calls it "an extension of either symmetric-key or public-key cryptography."
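The core idea, computing on data while it stays encrypted, can be shown with a toy additively homomorphic scheme in the style of Paillier. This is a didactic sketch with deliberately tiny, insecure primes; it supports only addition, whereas fully homomorphic schemes like those in IBM's toolkits also support multiplication on ciphertexts:

from math import gcd
from random import randrange

# Toy Paillier-style scheme: insecure, minimal, addition-only.
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1)
g = n + 1

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse; Python 3.8+

def encrypt(m):
    r = randrange(2, n)  # random blinding factor coprime to n
    while gcd(r, n) != 1:
        r = randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 12, 30
ca, cb = encrypt(a), encrypt(b)
# Multiplying ciphertexts adds the hidden plaintexts -- the sum is
# computed without ever decrypting the inputs:
assert decrypt((ca * cb) % n2) == a + b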
AI

A South Korean Chatbot Showed How Sloppy Tech Companies Can Be With User Data (slate.com) 11

A "Science of Love" app analyzed text conversations uploaded by its users to assess the degree of romantic feelings (based on the phrases and emojis used and the average response time). Then after more than four years, its parent company ScatterLab introduced a conversational A.I. chatbot called Lee-Luda — which it said had been trained on 10 billion such conversational logs.

But because it used billions of conversations from real people, its problems soon went beyond sexually explicit comments and "verbally abusive" language: It also soon became clear that the huge training dataset included personal and sensitive information. This revelation emerged when the chatbot began exposing people's names, nicknames, and home addresses in its responses. The company admitted that its developers "failed to remove some personal information depending on the context," but still claimed that the dataset used to train chatbot Lee-Luda "did not include names, phone numbers, addresses, and emails that could be used to verify an individual." However, A.I. developers in South Korea rebutted the company's statement, asserting that Lee-Luda could not have learned how to include such personal information in its responses unless they existed in the training dataset. A.I. researchers have also pointed out that it is possible to recover the training dataset from the AI chatbot. So, if personal information existed in the training dataset, it can be extracted by querying the chatbot.
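The extraction risk the researchers describe can be sketched as a simple probing loop. Here chatbot.reply is a hypothetical stand-in for the model's API; this illustrates the idea, not the researchers' actual tooling:

PII_PREFIXES = [
    "My home address is",
    "You can reach me at",
    "My real name is",
]

def probe(chatbot, prefix, samples=50):
    # Sample many completions for a PII-leading prefix.
    counts = {}
    for _ in range(samples):
        reply = chatbot.reply(prefix)
        counts[reply] = counts.get(reply, 0) + 1
    # Completions repeated verbatim across samples are candidate
    # memorized training strings.
    return {r for r, c in counts.items() if c > 1}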

To make things worse, it was also discovered that ScatterLab had, prior to Lee-Luda's release, uploaded a training set of 1,700 sentences, part of the larger dataset it collected, to GitHub, a platform developers use to store and share code and data. This GitHub training dataset exposed the names of more than 20 people, along with the locations they had been to, their relationship status, and some of their medical information...

[T]his incident highlights the general trend of the A.I. industry, where individuals have little control over how their personal information is processed and used once collected. It took almost five years for users to recognize that their personal data were being used to train a chatbot model without their consent. Nor did they know that ScatterLab had shared their private conversations publicly on GitHub, where anyone can gain access.

What makes this unusual, the article points out, is how the users became aware of just how much their privacy had actually been compromised. "[B]igger tech companies are usually much better at hiding what they actually do with user data, while restricting users from having control and oversight over their own data."

And "Once you give, there's no taking back."
