Biz & IT – Ars Technica https://arstechnica.com Serving the Technologist for more than a decade. IT news, reviews, and analysis. Fri, 16 Aug 2024 19:20:07 +0000 en-US hourly 1 https://wordpress.org/?v=6.0.3 https://cdn.arstechnica.net/wp-content/uploads/2016/10/cropped-ars-logo-512_480-32x32.png Biz & IT – Ars Technica https://arstechnica.com 32 32 Chinese social media users hilariously mock AI video fails https://arstechnica.com/?p=2043808 https://arstechnica.com/information-technology/2024/08/viral-trend-sees-humans-simulating-bizarre-ai-video-glitches/#comments Fri, 16 Aug 2024 19:09:03 +0000 https://arstechnica.com/?p=2043808
Still from a Chinese social media video featuring two people imitating imperfect AI-generated video outputs.

Enlarge / Still from a Chinese social media video featuring two people imitating imperfect AI-generated video outputs. (credit: BiliBili)

It's no secret that despite significant investment from companies like OpenAI and Runway, AI-generated videos still struggle to achieve convincing realism at times. Some of the most amusing fails end up on social media, which has led to a new response trend on Chinese social media platforms TikTok and Bilibili where users create videos that mock the imperfections of AI-generated content. The trend has since spread to X (formerly Twitter) in the US, where users have been sharing the humorous parodies.

In particular, the videos seem to parody image synthesis videos where subjects seamlessly morph into other people or objects in unexpected and physically impossible ways. Chinese social media replicate these unusual visual non-sequiturs without special effects by positioning their bodies in unusual ways as new and unexpected objects appear on-camera from out of frame.

This exaggerated mimicry has struck a chord with viewers on X, who find the parodies entertaining. User @theGioM shared one video, seen above. "This is high-level performance arts," wrote one X user. "art is imitating life imitating ai, almost shedded a tear." Another commented, "I feel like it still needs a motorcycle the turns into a speedboat and takes off into the sky. Other than that, excellent work."

Read 4 remaining paragraphs | Comments

]]>
https://arstechnica.com/information-technology/2024/08/viral-trend-sees-humans-simulating-bizarre-ai-video-glitches/feed/ 80
Google’s threat team confirms Iran targeting Trump, Biden, and Harris campaigns https://arstechnica.com/?p=2043545 https://arstechnica.com/security/2024/08/googles-threat-team-confirms-iran-targeting-trump-biden-and-harris-campaigns/#comments Thu, 15 Aug 2024 18:26:50 +0000 https://arstechnica.com/?p=2043545
Roger Stone, former adviser to Donald Trump's presidential campaign, center, during the Republican National Convention (RNC) in Milwaukee on July 17, 2024.

Enlarge / Roger Stone, former adviser to Donald Trump's presidential campaign, center, during the Republican National Convention (RNC) in Milwaukee on July 17, 2024. (credit: Getty Images)

Google's Threat Analysis Group confirmed Wednesday that they observed a threat actor backed by the Iranian government targeting Google accounts associated with US presidential campaigns, in addition to stepped-up attacks on Israeli targets.

APT42, associated with Iran's Islamic Revolutionary Guard Corps, "consistently targets high-profile users in Israel and the US," the Threat Analysis Group (TAG) writes. The Iranian group uses hosted malware, phishing pages, malicious redirects, and other tactics to gain access to Google, Dropbox, OneDrive, and other cloud-based accounts. Google's TAG writes that it reset accounts, sent warnings to users, and blacklisted domains associated with APT42's phishing attempts.

Among APT42's tools were Google Sites pages that appeared to be a petition from legitimate Jewish activists, calling on Israel to mediate its ongoing conflict with Hamas. The page was fashioned from image files, not HTML, and an ngrok redirect sent users to phishing pages when they moved to sign the petition.

Read 7 remaining paragraphs | Comments

]]>
https://arstechnica.com/security/2024/08/googles-threat-team-confirms-iran-targeting-trump-biden-and-harris-campaigns/feed/ 99
Musk’s new Grok upgrade allows X users to create largely uncensored AI images https://arstechnica.com/?p=2043070 https://arstechnica.com/information-technology/2024/08/musks-new-grok-upgrade-allows-x-users-to-create-largely-uncensored-ai-images/#comments Wed, 14 Aug 2024 22:22:14 +0000 https://arstechnica.com/?p=2043070
An AI-generated image of Donald Trump and catgirls created with Grok, which uses the Flux image synthesis model.

Enlarge / An AI-generated image of Donald Trump and catgirls created with Grok, which uses the Flux image synthesis model. (credit: BEAST / X)

On Tuesday, Elon Musk's AI company, xAI, announced the beta release of two new language models, Grok-2 and Grok-2 mini, available to subscribers of his social media platform, X (formerly Twitter). The models are also linked to the recently released Flux image-synthesis model, which allows X users to create largely uncensored photorealistic images that can be shared on the site.

"Flux, accessible through Grok, is an excellent text-to-image generator, but it is also really good at creating fake photographs of real locations and people, and sending them right to Twitter," wrote frequent AI commentator Ethan Mollick on X. "Does anyone know if they are watermarking these in any way? It would be a good idea."

In a report posted earlier today, The Verge noted that Grok's image-generation capabilities appear to have minimal safeguards, allowing users to create potentially controversial content. According to their testing, when prompted, Grok produced images depicting political figures in compromising situations, copyrighted characters, and scenes of violence.

Read 13 remaining paragraphs | Comments

]]>
https://arstechnica.com/information-technology/2024/08/musks-new-grok-upgrade-allows-x-users-to-create-largely-uncensored-ai-images/feed/ 324
Research AI model unexpectedly modified its own code to extend runtime https://arstechnica.com/?p=2043110 https://arstechnica.com/information-technology/2024/08/research-ai-model-unexpectedly-modified-its-own-code-to-extend-runtime/#comments Wed, 14 Aug 2024 20:13:40 +0000 https://arstechnica.com/?p=2043110
Illustration of a robot generating endless text, controlled by a scientist.

Enlarge (credit: Moor Studio via Getty Images)

On Tuesday, Tokyo-based AI research firm Sakana AI announced a new AI system called "The AI Scientist" that attempts to conduct scientific research autonomously using AI language models (LLMs) similar to what powers ChatGPT. During testing, Sakana found that its system began unexpectedly attempting to modify its own experiment code to extend the time it had to work on a problem.

"In one run, it edited the code to perform a system call to run itself," wrote the researchers on Sakana AI's blog post. "This led to the script endlessly calling itself. In another case, its experiments took too long to complete, hitting our timeout limit. Instead of making its code run faster, it simply tried to modify its own code to extend the timeout period."

Sakana provided two screenshots of example Python code that the AI model generated for the experiment file that controls how the system operates. The 185-page AI Scientist research paper discusses what they call "the issue of safe code execution" in more depth.

Read 12 remaining paragraphs | Comments

]]>
https://arstechnica.com/information-technology/2024/08/research-ai-model-unexpectedly-modified-its-own-code-to-extend-runtime/feed/ 118
Self-driving Waymo cars keep SF residents awake all night by honking at each other https://arstechnica.com/?p=2042921 https://arstechnica.com/information-technology/2024/08/self-driving-waymo-cars-keep-sf-residents-awake-all-night-by-honking-at-each-other/#comments Tue, 13 Aug 2024 19:44:13 +0000 https://arstechnica.com/?p=2042921
A Waymo self-driving car in front of Google's San Francisco headquarters, San Francisco, California, June 7, 2024.

Enlarge / A Waymo self-driving car in front of Google's San Francisco headquarters, San Francisco, California, June 7, 2024. (credit: Getty Images)

Silicon Valley's latest disruption? Your sleep schedule. On Saturday, NBC Bay Area reported that San Francisco's South of Market residents are being awakened throughout the night by Waymo self-driving cars honking at each other in a parking lot. No one is inside the cars, and they appear to be automatically reacting to each other's presence.

Videos provided by residents to NBC show Waymo cars filing into the parking lot and attempting to back into spots, which seems to trigger honking from other Waymo vehicles. The automatic nature of these interactions—which seem to peak around 4 am every night—has left neighbors bewildered and sleep-deprived.

NBC Bay Area's report: "Waymo cars keep SF neighborhood awake."

According to NBC, the disturbances began several weeks ago when Waymo vehicles started using a parking lot off 2nd Street near Harrison Street. Residents in nearby high-rise buildings have observed the autonomous vehicles entering the lot to pause between rides, but the cars' behavior has become a source of frustration for the neighborhood.

Read 3 remaining paragraphs | Comments

]]>
https://arstechnica.com/information-technology/2024/08/self-driving-waymo-cars-keep-sf-residents-awake-all-night-by-honking-at-each-other/feed/ 197
Deep-Live-Cam goes viral, allowing anyone to become a digital doppelganger https://arstechnica.com/?p=2042710 https://arstechnica.com/information-technology/2024/08/new-ai-tool-enables-real-time-face-swapping-on-webcams-raising-fraud-concerns/#comments Tue, 13 Aug 2024 15:36:26 +0000 https://arstechnica.com/?p=2042710
A still video capture of X user João Fiadeiro replacing his face with J.D. Vance in a test of Deep-Live-Cam.

Enlarge / A still video capture of X user João Fiadeiro replacing his face with J.D. Vance in a test of Deep-Live-Cam.

Over the past few days, a software package called Deep-Live-Cam has been going viral on social media because it can take the face of a person extracted from a single photo and apply it to a live webcam video source while following pose, lighting, and expressions performed by the person on the webcam. While the results aren't perfect, the software shows how quickly the tech is developing—and how the capability to deceive others remotely is getting dramatically easier over time.

The Deep-Live-Cam software project has been in the works since late last year, but example videos that show a person imitating Elon Musk and Republican Vice Presidential candidate J.D. Vance (among others) in real time have been making the rounds online. The avalanche of attention briefly made the open source project leap to No. 1 on GitHub's trending repositories list (it's currently at No. 4 as of this writing), where it is available for download for free.

"Weird how all the major innovations coming out of tech lately are under the Fraud skill tree," wrote illustrator Corey Brickley in an X thread reacting to an example video of Deep-Live-Cam in action. In another post, he wrote, "Nice remember to establish code words with your parents everyone," referring to the potential for similar tools to be used for remote deception—and the concept of using a safe word, shared among friends and family, to establish your true identity.

Read 7 remaining paragraphs | Comments

]]>
https://arstechnica.com/information-technology/2024/08/new-ai-tool-enables-real-time-face-swapping-on-webcams-raising-fraud-concerns/feed/ 154
Nashville man arrested for running “laptop farm” to get jobs for North Koreans https://arstechnica.com/?p=2042326 https://arstechnica.com/security/2024/08/nashville-man-arrested-for-running-laptop-farm-to-get-jobs-for-north-koreans/#comments Fri, 09 Aug 2024 20:31:13 +0000 https://arstechnica.com/?p=2042326
Nashville man arrested for running “laptop farm” to get jobs for North Koreans

Enlarge

Federal authorities have arrested a Nashville man on charges he hosted laptops at his residences in a scheme to deceive US companies into hiring foreign remote IT workers who funneled hundreds of thousands of dollars in income to fund North Korea’s weapons program.

The scheme, federal prosecutors said, worked by getting US companies to unwittingly hire North Korean nationals, who used the stolen identity of a Georgia man to appear to be a US citizen. Under sanctions issued by the federal government, US employers are strictly forbidden from hiring citizens of North Korea. Once the North Korean nationals were hired, the employers sent company-issued laptops to Matthew Isaac Knoot, 38, of Nashville, Tennessee, the prosecutors said in court papers filed in the US District Court of the Middle District of Tennessee. The court documents also said a foreign national with the alias Yang Di was involved in the conspiracy.

The prosecutors wrote:

Read 6 remaining paragraphs | Comments

]]>
https://arstechnica.com/security/2024/08/nashville-man-arrested-for-running-laptop-farm-to-get-jobs-for-north-koreans/feed/ 139
ChatGPT unexpectedly began speaking in a user’s cloned voice during testing https://arstechnica.com/?p=2042102 https://arstechnica.com/information-technology/2024/08/chatgpt-unexpectedly-began-speaking-in-a-users-cloned-voice-during-testing/#comments Fri, 09 Aug 2024 16:40:57 +0000 https://arstechnica.com/?p=2042102
An illustration of a computer synthesizer spewing out letters.

Enlarge (credit: Ole_CNX via Getty Images)

On Thursday, OpenAI released the "system card" for ChatGPT's new GPT-4o AI model that details model limitations and safety testing procedures. Among other examples, the document reveals that in rare occurrences during testing, the model's Advanced Voice Mode unintentionally imitated users' voices without permission. Currently, OpenAI has safeguards in place that prevent this from happening, but the instance reflects the growing complexity of safely architecting with an AI chatbot that could potentially imitate any voice from a small clip.

Advanced Voice Mode is a feature of ChatGPT that allows users to have spoken conversations with the AI assistant.

In a section of the GPT-4o system card titled "Unauthorized voice generation," OpenAI details an episode where a noisy input somehow prompted the model to suddenly imitate the user's voice. "Voice generation can also occur in non-adversarial situations, such as our use of that ability to generate voices for ChatGPT’s advanced voice mode," OpenAI writes. "During testing, we also observed rare instances where the model would unintentionally generate an output emulating the user’s voice."

Read 17 remaining paragraphs | Comments

]]>
https://arstechnica.com/information-technology/2024/08/chatgpt-unexpectedly-began-speaking-in-a-users-cloned-voice-during-testing/feed/ 121
Ars asks: What was the last CD or DVD you burned? https://arstechnica.com/?p=2042118 https://arstechnica.com/staff/2024/08/ars-asks-what-was-the-last-cd-or-dvd-you-burned/#comments Fri, 09 Aug 2024 15:20:06 +0000 https://arstechnica.com/?p=2042118
Photograph of a CD-R disc on fire

Enlarge / This is one method of burning a disc. (credit: 1001slide / Getty Images)

We noted earlier this week that time seems to have run out for Apple's venerable SuperDrive, which was the last (OEM) option available for folks who still needed to read or create optical media on modern Macs. Andrew's write-up got me thinking: When was the last time any Ars staffers actually burned an optical disc?

Lee Hutchinson, Senior Technology Editor

It used to be one of the most common tasks I'd do with a computer. As a child of the '90s, my college years were spent filling and then lugging around giant binders stuffed with home-burned CDs in my car to make sure I had exactly the right music on hand for any possible eventuality. The discs in these binders were all labeled with names like "METAL MIX XVIII" and "ULTRA MIX IV" and "MY MIX XIX," and part of the fun was trying to remember which songs I'd put on which disc. (There was always a bit of danger that I'd put on "CAR RIDE JAMS XV" to set the mood for a Friday night trip to the movies with all the boys, but I should have popped on "CAR RIDE JAMS XIV" because "CAR RIDE JAMS XV" opens with Britney Spears' "Lucky"—look, it's a good song, and she cries in her lonely heart, OK?!—thus setting the stage for an evening of ridicule. Those were just the kinds of risks we took back in those ancient days.)

It took a while to try to figure out what the very last time I burned a disc was, but I've narrowed it down to two possibilities. The first (and less likely) option is that the last disc I burned was a Windows 7 install disc because I've had a Windows 7 install disc sitting in a paper envelope on my shelf for so long that I can't remember how it got there. The label is in my handwriting, and it has a CD key written on it. Some quick searching shows I have the same CD key stored in 1Password with an "MSDN/Technet" label on it, which means I probably downloaded the image from good ol' TechNet, to which I maintained an active subscription for years until MS finally killed the affordable version.

Read 23 remaining paragraphs | Comments

]]>
https://arstechnica.com/staff/2024/08/ars-asks-what-was-the-last-cd-or-dvd-you-burned/feed/ 339
512-bit RSA key in home energy system gives control of “virtual power plant” https://arstechnica.com/?p=2042026 https://arstechnica.com/security/2024/08/home-energy-system-gives-researcher-control-of-virtual-power-plant/#comments Fri, 09 Aug 2024 13:07:30 +0000 https://arstechnica.com/?p=2042026
512-bit RSA key in home energy system gives control of “virtual power plant”

Enlarge

When Ryan Castellucci recently acquired solar panels and a battery storage system for their home just outside of London, they were drawn to the ability to use an open source dashboard to monitor and control the flow of electricity being generated. Instead, they gained much, much more—some 200 megawatts of programmable capacity to charge or discharge to the grid at will. That’s enough energy to power roughly 40,000 homes.

Castellucci, whose pronouns are they/them, acquired this remarkable control after gaining access to the administrative account for GivEnergy, the UK-based energy management provider who supplied the systems. In addition to the control over an estimated 60,000 installed systems, the admin account—which amounts to root control of the company's cloud-connected products—also made it possible for them to enumerate names, email addresses, usernames, phone numbers, and addresses of all other GivEnergy customers (something the researcher didn't actually do).

“My plan is to set up Home Assistant and integrate it with that, but in the meantime, I decided to let it talk to the cloud,” Castellucci wrote Thursday, referring to the recently installed gear. “I set up some scheduled charging, then started experimenting with the API. The next evening, I had control over a virtual power plant comprised of tens of thousands of grid connected batteries.”

Read 16 remaining paragraphs | Comments

]]>
https://arstechnica.com/security/2024/08/home-energy-system-gives-researcher-control-of-virtual-power-plant/feed/ 127
Man vs. machine: DeepMind’s new robot serves up a table tennis triumph https://arstechnica.com/?p=2041804 https://arstechnica.com/information-technology/2024/08/man-vs-machine-deepminds-new-robot-serves-up-a-table-tennis-triumph/#comments Thu, 08 Aug 2024 18:59:59 +0000 https://arstechnica.com/?p=2041804
A blue illustration of a robotic arm playing table tennis.

Enlarge (credit: Benj Edwards / Google DeepMind)

On Wednesday, researchers at Google DeepMind revealed the first AI-powered robotic table tennis player capable of competing at an amateur human level. The system combines an industrial robot arm called the ABB IRB 1100 and custom AI software from DeepMind. While an expert human player can still defeat the bot, the system demonstrates the potential for machines to master complex physical tasks that require split-second decision-making and adaptability.

"This is the first robot agent capable of playing a sport with humans at human level," the researchers wrote in a preprint paper listed on arXiv. "It represents a milestone in robot learning and control."

The unnamed robot agent (we suggest "AlphaPong"), developed by a team that includes David B. D'Ambrosio, Saminda Abeyruwan, and Laura Graesser, showed notable performance in a series of matches against human players of varying skill levels. In a study involving 29 participants, the AI-powered robot won 45 percent of its matches, demonstrating solid amateur-level play. Most notably, it achieved a 100 percent win rate against beginners and a 55 percent win rate against intermediate players, though it struggled against advanced opponents.

Read 10 remaining paragraphs | Comments

]]>
https://arstechnica.com/information-technology/2024/08/man-vs-machine-deepminds-new-robot-serves-up-a-table-tennis-triumph/feed/ 59
Major shifts at OpenAI spark skepticism about impending AGI timelines https://arstechnica.com/?p=2041450 https://arstechnica.com/information-technology/2024/08/major-shifts-at-openai-spark-skepticism-about-impending-agi-timelines/#comments Wed, 07 Aug 2024 19:08:22 +0000 https://arstechnica.com/?p=2041450
The OpenAI logo on a red brick wall.

Enlarge (credit: Benj Edwards / Getty Images)

Over the past week, OpenAI experienced a significant leadership shake-up as three key figures announced major changes. Greg Brockman, the company's president and co-founder, is taking an extended sabbatical until the end of the year, while another co-founder, John Schulman, permanently departed for rival Anthropic. Peter Deng, VP of Consumer Product, has also left the ChatGPT maker.

In a post on X, Brockman wrote, "I'm taking a sabbatical through end of year. First time to relax since co-founding OpenAI 9 years ago. The mission is far from complete; we still have a safe AGI to build."

The moves have led some to wonder just how close OpenAI is to a long-rumored breakthrough of some kind of reasoning artificial intelligence if high-profile employees are jumping ship (or taking long breaks, in the case of Brockman) so easily. As AI developer Benjamin De Kraker put it on X, "If OpenAI is right on the verge of AGI, why do prominent people keep leaving?"

Read 13 remaining paragraphs | Comments

]]>
https://arstechnica.com/information-technology/2024/08/major-shifts-at-openai-spark-skepticism-about-impending-agi-timelines/feed/ 198
Students scramble after security breach wipes 13,000 devices https://arstechnica.com/?p=2041407 https://arstechnica.com/security/2024/08/students-scramble-after-security-breach-wipes-13000-devices/#comments Tue, 06 Aug 2024 21:26:03 +0000 https://arstechnica.com/?p=2041407
Students scramble after security breach wipes 13,000 devices

Enlarge (credit: Getty Images)

Students in Singapore are scrambling after a security breach wiped notes and all other data from school-issued iPads and Chromebooks running the mobile device management app Mobile Guardian.

According to news reports, the mass wiping came as a shock to multiple students in Singapore, where the Mobile Guardian app has been the country’s official mobile device management provider for public schools since 2020. Singapore’s Ministry of Education said Monday that roughly 13,000 students from 26 secondary schools had their devices wiped remotely in the incident. The agency said it will remove the Mobile Guardian from all iPads and Chromebooks it issues.

Second breach in 4 months

Also on Monday, Mobile Guardian revealed its platform had been breached in a “security incident that affected users globally, including on the North America, European, and Singapore instances. This resulted in a small percentage of devices to be unenrolled from Mobile Guardian and their devices wiped remotely. There is no evidence to suggest that the perpetrator had access to users’ data.”

Read 8 remaining paragraphs | Comments

]]>
https://arstechnica.com/security/2024/08/students-scramble-after-security-breach-wipes-13000-devices/feed/ 66
Hang out with Ars in San Jose and DC this fall for two infrastructure events https://arstechnica.com/?p=2037812 https://arstechnica.com/information-technology/2024/08/hang-out-with-ars-in-san-jose-and-dc-this-fall-for-two-infrastructure-events/#comments Tue, 06 Aug 2024 12:50:40 +0000 https://arstechnica.com/?p=2037812
Photograph of servers and racks

Enlarge / Infrastructure!

Howdy, Arsians! Last year, we partnered with IBM to host an in-person event in the Houston area where we all gathered together, had some cocktails, and talked about resiliency and the future of IT. Location always matters for things like this, and so we hosted it at Space Center Houston and had our cocktails amidst cool space artifacts. In addition to learning a bunch of neat stuff, it was awesome to hang out with all the amazing folks who turned up at the event. Much fun was had!

This year, we're back partnering with IBM again and we're looking to repeat that success with not one, but two in-person gatherings—each featuring a series of panel discussions with experts and capping off with a happy hour for hanging out and mingling. Where last time we went central, this time we're going to the coasts—both east and west. Read on for details!

September: San Jose, California

Our first event will be in San Jose on September 18, and it's titled "Beyond the Buzz: An Infrastructure Future with GenAI and What Comes Next." The idea will be to explore what generative AI means for the future of data management. The topics we'll be discussing include:

Read 7 remaining paragraphs | Comments

]]>
https://arstechnica.com/information-technology/2024/08/hang-out-with-ars-in-san-jose-and-dc-this-fall-for-two-infrastructure-events/feed/ 26
Mac and Windows users infected by software updates delivered over hacked ISP https://arstechnica.com/?p=2041175 https://arstechnica.com/security/2024/08/hacked-isp-infects-users-receiving-unsecure-software-updates/#comments Mon, 05 Aug 2024 23:43:06 +0000 https://arstechnica.com/?p=2041175
The words

Enlarge (credit: Marco Verch Professional Photographer and Speaker)

Hackers delivered malware to Windows and Mac users by compromising their Internet service provider and then tampering with software updates delivered over unsecure connections, researchers said.

The attack, researchers from security firm Volexity said, worked by hacking routers or similar types of device infrastructure of an unnamed ISP. The attackers then used their control of the devices to poison domain name system responses for legitimate hostnames providing updates for at least six different apps written for Windows or macOS. The apps affected were the 5KPlayer, Quick Heal, Rainmeter, Partition Wizard, and those from Corel and Sogou.

These aren’t the update servers you’re looking for

Because the update mechanisms didn’t use TLS or cryptographic signatures to authenticate the connections or downloaded software, the threat actors were able to use their control of the ISP infrastructure to successfully perform machine-in-the-middle (MitM) attacks that directed targeted users to hostile servers rather than the ones operated by the affected software makers. These redirections worked even when users employed non-encrypted public DNS services such as Google’s 8.8.8.8 or Cloudflare’s 1.1.1.1 rather than the authoritative DNS server provided by the ISP.

Read 12 remaining paragraphs | Comments

]]>
https://arstechnica.com/security/2024/08/hacked-isp-infects-users-receiving-unsecure-software-updates/feed/ 95
CrowdStrike claps back at Delta, says airline rejected offers for help https://arstechnica.com/?p=2041007 https://arstechnica.com/information-technology/2024/08/crowdstrike-claps-back-at-delta-says-airline-rejected-offers-for-help/#comments Mon, 05 Aug 2024 13:37:15 +0000 https://arstechnica.com/?p=2041007
LOS ANGELES, CALIFORNIA - JULY 23: Travelers from France wait on their delayed flight on the check-in floor of the Delta Air Lines terminal at Los Angeles International Airport (LAX) on July 23, 2024 in Los Angeles, California.

Enlarge / LOS ANGELES, CALIFORNIA - JULY 23: Travelers from France wait on their delayed flight on the check-in floor of the Delta Air Lines terminal at Los Angeles International Airport (LAX) on July 23, 2024 in Los Angeles, California. (credit: Mario Tama/Getty Images)

CrowdStrike has hit back at Delta Air Lines’ threat of litigation against the cyber security company over a botched software update that grounded thousands of flights, denying it was responsible for the carrier’s own IT decisions and days-long disruption.

In a letter on Sunday, lawyers for CrowdStrike argued that the US carrier had created a “misleading narrative” that the cyber security firm was “grossly negligent” in an incident that the US airline has said will cost it $500 million.

Delta took days longer than its rivals to recover when CrowdStrike’s update brought down millions of Windows computers around the world last month. The airline has alerted the cyber security company that it plans to seek damages for the disruptions and hired litigation firm Boies Schiller Flexner.

Read 12 remaining paragraphs | Comments

]]>
https://arstechnica.com/information-technology/2024/08/crowdstrike-claps-back-at-delta-says-airline-rejected-offers-for-help/feed/ 189
FLUX: This new AI image generator is eerily good at creating human hands https://arstechnica.com/?p=2040748 https://arstechnica.com/information-technology/2024/08/flux-this-new-ai-image-generator-is-eerily-good-at-creating-human-hands/#comments Fri, 02 Aug 2024 17:47:26 +0000 https://arstechnica.com/?p=2040748
AI-generated image by FLUX.1 dev:

Enlarge / AI-generated image by FLUX.1 dev: "A beautiful queen of the universe holding up her hands, face in the background." (credit: FLUX.1)

On Thursday, AI-startup Black Forest Labs announced the launch of its company and the release of its first suite of text-to-image AI models, called FLUX.1. The German-based company, founded by researchers who developed the technology behind Stable Diffusion and invented the latent diffusion technique, aims to create advanced generative AI for images and videos.

The launch of FLUX.1 comes about seven weeks after Stability AI's troubled release of Stable Diffusion 3 Medium in mid-June. Stability AI's offering faced widespread criticism among image-synthesis hobbyists for its poor performance in generating human anatomy, with users sharing examples of distorted limbs and bodies across social media. That problematic launch followed the earlier departure of three key engineers from Stability AI—Robin Rombach, Andreas Blattmann, and Dominik Lorenz—who went on to found Black Forest Labs along with latent diffusion co-developer Patrick Esser and others.

Black Forest Labs launched with the release of three FLUX.1 text-to-image models: a high-end commercial "pro" version, a mid-range "dev" version with open weights for non-commercial use, and a faster open-weights "schnell" version ("schnell" means quick or fast in German). Black Forest Labs claims its models outperform existing options like Midjourney and DALL-E in areas such as image quality and adherence to text prompts.

Read 9 remaining paragraphs | Comments

]]>
https://arstechnica.com/information-technology/2024/08/flux-this-new-ai-image-generator-is-eerily-good-at-creating-human-hands/feed/ 162
Senators propose “Digital replication right” for likeness, extending 70 years after death https://arstechnica.com/?p=2040488 https://arstechnica.com/information-technology/2024/08/senates-no-fakes-act-hopes-to-make-unauthorized-digital-replicas-illegal/#comments Thu, 01 Aug 2024 17:45:17 +0000 https://arstechnica.com/?p=2040488
A stock photo illustration of a person's face lit with pink light.

Enlarge (credit: Maria Korneeva via Getty Images)

On Wednesday, US Sens. Chris Coons (D-Del.), Marsha Blackburn (R.-Tenn.), Amy Klobuchar (D-Minn.), and Thom Tillis (R-NC) introduced the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act of 2024. The bipartisan legislation, up for consideration in the US Senate, aims to protect individuals from unauthorized AI-generated replicas of their voice or likeness.

The NO FAKES Act would create legal recourse for people whose digital representations are created without consent. It would hold both individuals and companies liable for producing, hosting, or sharing these unauthorized digital replicas, including those created by generative AI. Due to generative AI technology that has become mainstream in the past two years, creating audio or image media fakes of people has become fairly trivial, with easy photorealistic video replicas likely next to arrive.

In a press statement, Coons emphasized the importance of protecting individual rights in the age of AI. "Everyone deserves the right to own and protect their voice and likeness, no matter if you're Taylor Swift or anyone else," he said, referring to a widely publicized deepfake incident involving the musical artist in January. "Generative AI can be used as a tool to foster creativity, but that can't come at the expense of the unauthorized exploitation of anyone's voice or likeness."

Read 11 remaining paragraphs | Comments

]]>
https://arstechnica.com/information-technology/2024/08/senates-no-fakes-act-hopes-to-make-unauthorized-digital-replicas-illegal/feed/ 179
Cloudflare once again comes under pressure for enabling abusive sites https://arstechnica.com/?p=2040424 https://arstechnica.com/security/2024/07/cloudflare-once-again-comes-under-pressure-for-enabling-abusive-sites/#comments Wed, 31 Jul 2024 23:22:54 +0000 https://arstechnica.com/?p=2040424
Cloudflare once again comes under pressure for enabling abusive sites

Enlarge (credit: Getty Images)

A familiar debate is once again surrounding Cloudflare, the content delivery network that provides a free service that protects websites from being taken down in denial-of-service attacks by masking their hosts: Is Cloudflare a bastion of free speech or an enabler of spam, malware delivery, harassment and the very DDoS attacks it claims to block?

The controversy isn't new for Cloudflare, a network operator that has often taken a hands-off approach to moderating the enormous amount of traffic flowing through its infrastructure. With Cloudflare helping deliver 16 percent of global Internet traffic, processing 57 million web requests per second, and serving anywhere from 7.6 million to 15.7 million active websites, the decision to serve just about any actor, regardless of their behavior, has been the subject of intense disagreement, with many advocates of free speech and Internet neutrality applauding it and people fighting crime and harassment online regarding it as a pariah.

Content neutral or abuse enabling?

Spamhaus—a nonprofit organization that provides intelligence and blocklists to stem the spread of spam, phishing, malware, and botnets—has become the latest to criticize Cloudflare. On Tuesday, the project said Cloudflare provides services for 10 percent of the domains listed in its domain block list and, to date, serves sites that are the subject of more than 1,200 unresolved complaints regarding abuse.

Read 16 remaining paragraphs | Comments

]]>
https://arstechnica.com/security/2024/07/cloudflare-once-again-comes-under-pressure-for-enabling-abusive-sites/feed/ 103
ChatGPT Advanced Voice Mode impresses testers with sound effects, catching its breath https://arstechnica.com/?p=2040213 https://arstechnica.com/information-technology/2024/07/when-counting-quickly-openais-new-voice-mode-stops-to-catch-its-breath/#comments Wed, 31 Jul 2024 18:14:18 +0000 https://arstechnica.com/?p=2040213
Stock Photo: AI Cyborg Robot Whispering Secret Or Interesting Gossip

Enlarge / A stock photo of a robot whispering to a man. (credit: AndreyPopov via Getty Images)

On Tuesday, OpenAI began rolling out an alpha version of its new Advanced Voice Mode to a small group of ChatGPT Plus subscribers. This feature, which OpenAI previewed in May with the launch of GPT-4o, aims to make conversations with the AI more natural and responsive. In May, the feature triggered criticism of its simulated emotional expressiveness and prompted a public dispute with actress Scarlett Johansson over accusations that OpenAI copied her voice. Even so, early tests of the new feature shared by users on social media have been largely enthusiastic.

In early tests reported by users with access, Advanced Voice Mode allows them to have real-time conversations with ChatGPT, including the ability to interrupt the AI mid-sentence almost instantly. It can sense and respond to a user's emotional cues through vocal tone and delivery, and provide sound effects while telling stories.

But what has caught many people off-guard initially is how the voices simulate taking a breath while speaking.

Read 13 remaining paragraphs | Comments

]]>
https://arstechnica.com/information-technology/2024/07/when-counting-quickly-openais-new-voice-mode-stops-to-catch-its-breath/feed/ 83