"But they knew in their hearts that once science had declared a thing possible, there was no escape from its eventual realization..."
25518 stories
·
16 followers

Android’s AirTag competitor gears up for launch, thanks to iOS release

Pebblebee's Android trackers. (credit: Pebblebee)

Will Google ever launch its "Find My Device" network? The Android ecosystem was supposed to have its own version of Apple's AirTags by now. Google has had a crowd-sourced device-tracking network sitting dormant on 3 billion Android phones since December 2022. Partners have been ready to go with Bluetooth tag hardware since May 2023! This was all supposed to launch a year ago, but Google has been in a holding pattern. The good news is we're finally seeing some progress after a year of silence.

The reason for Google's lengthy delay is actually Apple. A week before Google's partners announced their Bluetooth tags for the Android network, Google and Apple jointly announced a standard to detect "unknown" Bluetooth trackers and show users alerts if their phone thinks they're being stalked. Because a tracker's owner can constantly see its location, a tag covertly slipped into a bag or car becomes a stalking device; nobody wants that, so everyone's favorite mobile duopoly is teaming up.

Google did its half of this partnership and rolled out AirTag detection in July 2023. At the same time, Google also announced: "We’ve made the decision to hold the rollout of the Find My Device network until Apple has implemented protections for iOS." Surely Apple would be burning the midnight oil to launch Android tag detection on iOS as soon as possible so that Google could start competing with AirTags.



$158,000 ALS drug pulled from market after failing in large clinical trial

(credit: Amylyx)

Amylyx, the maker of a new drug to treat ALS, is pulling that drug from the market and laying off 70 percent of its workers after a large clinical trial found that the drug did not help patients, according to an announcement from the company Thursday.

The drug, Relyvrio, won approval from the Food and Drug Administration in September 2022 to slow the progression of ALS (amyotrophic lateral sclerosis, or Lou Gehrig's disease). However, the data behind the controversial decision was shaky at best; it came from a single study of just 137 patients that had several methodological weaknesses and questionable statistical significance, and FDA advisors initially voted against approval. Still, given the severity of the neurodegenerative disease and the lack of effective treatments, the FDA ultimately granted approval on the condition that the company complete a Phase III clinical trial to solidify the claimed benefits.

Relyvrio—a combination of two existing, generic drugs—went on the market with a list price of $158,000.



Report: Israel used AI to identify bombing targets in Gaza

Photo collage showing a crosshair over a destroyed building in Gaza. (Image: Cath Virginia / The Verge | Photo from Getty Images)

Israel’s military has been using artificial intelligence to help choose its bombing targets in Gaza, sacrificing accuracy in favor of speed and killing thousands of civilians in the process, according to an investigation by Israel-based publications +972 Magazine and Local Call.

The system, called Lavender, was developed in the aftermath of Hamas’ October 7th attacks, the report claims. At its peak, Lavender marked 37,000 Palestinians in Gaza as suspected “Hamas militants” and authorized their assassinations.

Israel’s military denied the existence of such a kill list in a statement to +972 and Local Call. A spokesperson told CNN that AI was not being used to identify suspected terrorists but did not dispute the existence of the Lavender system, which the spokesperson described as “merely tools for analysts in the target identification process.” Analysts “must conduct independent examinations, in which they verify that the identified targets meet the relevant definitions in accordance with international law and additional restrictions stipulated in IDF directives,” the spokesperson told CNN. The Israel Defense Forces did not immediately respond to The Verge’s request for comment.

In interviews with +972 and Local Call, however, Israeli intelligence officers said they weren’t required to conduct independent examinations of the Lavender targets before bombing them but instead effectively served as “a ‘rubber stamp’ for the machine’s decisions.” In some instances, officers’ only role in the process was determining whether a target was male.

Choosing targets

To build the Lavender system, information on known Hamas and Palestinian Islamic Jihad operatives was fed into a dataset — but, according to one source who worked with the data science team that trained Lavender, so was data on people loosely affiliated with Hamas, such as employees of Gaza’s Internal Security Ministry. “I was bothered by the fact that when Lavender was trained, they used the term ‘Hamas operative’ loosely, and included people who were civil defense workers in the training dataset,” the source told +972.

Lavender was trained to identify “features” associated with Hamas operatives, including being in a WhatsApp group with a known militant, changing cellphones every few months, or changing addresses frequently. That data was then used to rank other Palestinians in Gaza on a 1–100 scale based on how similar they were to the known Hamas operatives in the initial dataset. People who reached a certain threshold were then marked as targets for strikes. That threshold was always changing “because it depends on where you set the bar of what a Hamas operative is,” one military source told +972.

The system had a 90 percent accuracy rate, sources said, meaning that about 10 percent of the people identified as Hamas operatives weren’t members of Hamas’ military wing at all. Some of the people Lavender flagged as targets just happened to have names or nicknames identical to those of known Hamas operatives; others were Hamas operatives’ relatives or people who used phones that had once belonged to a Hamas militant. “Mistakes were treated statistically,” a source who used Lavender told +972. “Because of the scope and magnitude, the protocol was that even if you don’t know for sure that the machine is right, you know statistically that it’s fine. So you go for it.”

Collateral damage

Intelligence officers were given wide latitude when it came to civilian casualties, sources told +972. During the first few weeks of the war, officers were allowed to kill up to 15 or 20 civilians for every lower-level Hamas operative targeted by Lavender; for senior Hamas officials, the military authorized “hundreds” of collateral civilian casualties, the report claims.

Suspected Hamas operatives were also targeted in their homes using a system called “Where’s Daddy?” officers told +972. That system put targets generated by Lavender under ongoing surveillance, tracking them until they reached their homes — at which point, they’d be bombed, often alongside their entire families, officers said. At times, however, officers would bomb homes without verifying that the targets were inside, wiping out scores of civilians in the process. “It happened to me many times that we attacked a house, but the person wasn’t even home,” one source told +972. “The result is that you killed a family for no reason.”

AI-driven warfare

Mona Shtaya, a non-resident fellow at the Tahrir Institute for Middle East Policy, told The Verge that the Lavender system is an extension of Israel’s use of surveillance technologies on Palestinians in both the Gaza Strip and the West Bank.

Shtaya, who is based in the West Bank, told The Verge that these tools are particularly troubling in light of reports that Israeli defense startups are hoping to export their battle-tested technology abroad.

Since Israel’s ground offensive in Gaza began, the Israeli military has relied on and developed a host of technologies to identify and target suspected Hamas operatives. In March, The New York Times reported that Israel deployed a mass facial recognition program in the Gaza Strip — creating a database of Palestinians without their knowledge or consent — which the military then used to identify suspected Hamas operatives. In one instance, the facial recognition tool identified Palestinian poet Mosab Abu Toha as a suspected Hamas operative. Abu Toha was detained for two days in an Israeli prison, where he was beaten and interrogated before being returned to Gaza.

Another AI system, called “The Gospel,” was used to mark buildings or structures that Hamas is believed to operate from. According to a +972 and Local Call report from November, The Gospel also contributed to vast numbers of civilian casualties. “When a 3-year-old girl is killed in a home in Gaza, it’s because someone in the army decided it wasn’t a big deal for her to be killed — that it was a price worth paying in order to hit [another] target,” a military source told the publications at the time.

“We need to look at this as a continuation of the collective punishment policies that have been weaponized against Palestinians for decades now,” Shtaya said. “We need to make sure that war times are not used to justify the mass surveillance and mass killing of people, especially civilians, in places like Gaza.”


Roku’s idea of showing ads on your HDMI inputs seems like an inevitable hell

Vector collage of the Roku logo. (Illustration: The Verge)

In this week’s edition of his Lowpass newsletter, Janko Roettgers covered a Roku patent that seems to telegraph that the company is planning some heavy advertising tactics for people who buy Roku TVs. The patent centers on the idea of displaying ads on these TVs whenever they’re tuned to an HDMI input that’s paused or idle. Theoretically, this would allow Roku to present ads throughout your whole TV experience — and in places where it’s not viable to do so today. Your PS5, Xbox, Apple TV, or Blu-ray player could become yet another canvas for the company to continue growing its already-lucrative advertising business.

According to the patent, the company would use a number of different clues to determine when an HDMI source is paused; the Roku TV could wait for extended silence on the audio track or analyze the onscreen frames to gauge when movement has stopped, among other approaches. The patent also mentions using automatic content recognition (ACR) to detect what you’re watching on an Apple TV (or playing on a console) in order to present you with relevant ads. ACR is nothing new; it’s one of those things many of us agree to when quickly clicking through a new TV’s initial setup.
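For a concrete sense of the frame-analysis clue, here is a speculative sketch in Python. The patent doesn’t publish an implementation, so the thresholds and the frame source here are invented for illustration; an actual TV would pull frames from its HDMI pipeline and likely combine this with the audio-silence check.

```python
# Speculative sketch: treat an HDMI feed as "paused" once consecutive
# frames stop changing. Thresholds and the frame source are invented.
import numpy as np

FRAME_DIFF_THRESHOLD = 1.0    # mean per-pixel change below this ~= static image
STILL_FRAMES_REQUIRED = 150   # roughly five seconds at 30 fps


def looks_paused(frames) -> bool:
    """Return True once the frame stream has been static for long enough."""
    still_count = 0
    prev = None
    for frame in frames:  # each frame: a numpy array of pixel values
        if prev is not None:
            # Cast up from uint8 so the subtraction can't wrap around.
            diff = np.abs(frame.astype(np.int16) - prev.astype(np.int16)).mean()
            still_count = still_count + 1 if diff < FRAME_DIFF_THRESHOLD else 0
            if still_count >= STILL_FRAMES_REQUIRED:
                return True
        prev = frame
    return False
```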

Obviously, it’d be very easy for Roku to massively screw this up, interrupt your entertainment, and outrage customers. And a patent itself is no guarantee that this ads-on-every-HDMI-input concept will become reality. But it does follow a recent trend of streaming box (and stick) makers pushing right up against the line of what consumers are willing to tolerate — and testing whether they can quietly move the goalposts. Even Microsoft is dabbling with the same.

A few months ago, Amazon began automatically playing trailers on Fire TV devices right at startup if a user took no immediate action. The move sure did piss a lot of people off — but apparently not enough for the company to revert the change. You can disable the autoplaying ads in settings, but even then, you’ll sometimes see full-screen banner slideshows.

I had a very strong “they can’t be serious with this” reaction to the immediate ads and sought comment from Amazon. Spokesperson Madison Daniels told me the following:

We’re constantly looking for more ways to help customers discover new TV shows and movies on Fire TV and ads are one way we do that. Our most recent update to the Fire TV home screen means customers will start on the Learn More button of one of our most popular placements to discover something great to watch.

Isn’t discoverability the very purpose of the homescreen itself? I digress. Not long after that, a Chromecast user spotted a full-screen ad for chicken tender wraps from Carl’s Jr. Does the wrap look delicious? Absolutely. But this goes a step beyond the typical (and I’d say expected / acceptable) ads we’re used to seeing. Sponsored “recommendations” for movies and shows have become quite common across TV platforms and streaming software. But a chicken wrap? C’mon.

The inescapable truth is that ads help to subsidize the cost of these streaming players, some of which can be purchased for under $30. But you can also spend $100 more than that on a Fire TV Cube, and you’ll be getting blasted with the same autoplaying ads as someone who bought the cheapest model. That’s a perfect example of where this ham-fisted advertising really rubs me the wrong way. What’s the point of getting the premium thing?

This is why I almost always advise people to just spend the extra money on an Apple TV 4K. The reprieve from drowning in ads is well worth it. There are ways to circumvent ads on other devices, whether it’s Pi-hole, alternate launchers on Android streamers, or other workarounds. But those are extra steps that most people will never take. And for them, the outlook keeps getting bleaker.

I hope that Roku doesn’t implement the ideas laid out in this patent covered by Lowpass. Roku TVs are often good! They’re dependable, get a long run of software updates, and feel instantly familiar to many people right out of the box. And I’m looking forward to checking out how a Roku Pro TV compares with today’s impressive Mini LED competition from TCL, Hisense, and more. But I’m not confident that the company won’t keep speeding down this trajectory of getting ads in front of eyeballs at all costs. Even if Roku doesn’t, it seems like only a matter of time before another TV brand takes the worst kind of inspiration from this patent.


Claims of TikTok whistleblower may not add up

TikTok logo next to an inverted US flag. (credit: SOPA Images | LightRocket | Getty Images)

The United States government is currently poised to outlaw TikTok. Little of the evidence that convinced Congress the app may be a national security threat has been shared publicly, in some cases because it remains classified. But one former TikTok employee turned whistleblower, who claims to have driven key news reporting and congressional concerns about the app, has now come forward.

Zen Goziker worked at TikTok as a risk manager, a role that involved protecting the company from external security and reputational threats. In a wrongful termination lawsuit filed against TikTok's parent company ByteDance in January, he alleges he was fired in February 2022 for refusing “to sign off” on Project Texas, a $1.5 billion program that TikTok designed to assuage US government security concerns by storing American data on servers managed by Oracle.



OpenAI transcribed over a million hours of YouTube videos to train GPT-4

Photo illustration of the shape of a brain on a circuit board. (Image: Cath Virginia / The Verge | Photos from Getty Images)

Earlier this week, The Wall Street Journal reported that AI companies were running into a wall when it comes to gathering high-quality training data. Today, The New York Times detailed some of the ways companies have dealt with this. Unsurprisingly, it involves doing things that fall into the hazy gray area of AI copyright law.

The story opens with OpenAI, which, desperate for training data, reportedly developed its Whisper audio transcription model to get over the hump, transcribing over a million hours of YouTube videos to train GPT-4, its most advanced large language model. That’s according to The New York Times, which reports that the company knew this was legally questionable but believed it to be fair use. OpenAI president Greg Brockman was personally involved in collecting videos that were used, the Times writes.
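For illustration, here is a minimal sketch of that kind of bulk-transcription step, using the open-source openai-whisper package (pip install openai-whisper). The audio directory is a hypothetical stand-in; this shows the general speech-to-text technique, not OpenAI’s internal pipeline.

```python
# Minimal sketch: transcribe a folder of audio files into plain text
# suitable for a training corpus. File paths are hypothetical.
from pathlib import Path

import whisper

model = whisper.load_model("base")  # larger checkpoints trade speed for accuracy

corpus = []
for audio_file in sorted(Path("audio").glob("*.mp3")):
    result = model.transcribe(str(audio_file))
    corpus.append(result["text"])  # the plain-text transcript for this file

print(f"Transcribed {len(corpus)} files into training text")
```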

OpenAI spokesperson Lindsay Held told The Verge in an email that the company curates “unique” datasets for each of its models to “help their understanding of the world” and maintain its global research competitiveness. Held added that the company uses “numerous sources including publicly available data and partnerships for non-public data,” and that it’s looking into generating its own synthetic data.

The Times article says that the company exhausted supplies of useful data in 2021 and discussed transcribing YouTube videos, podcasts, and audiobooks after blowing through other resources. By then, it had trained its models on data that included computer code from GitHub, chess move databases, and schoolwork content from Quizlet.

Google spokesperson Matt Bryant told The Verge in an email that the company has “seen unconfirmed reports” of OpenAI’s activity, adding that “both our robots.txt files and Terms of Service prohibit unauthorized scraping or downloading of YouTube content.” YouTube CEO Neal Mohan said similar things this week about the possibility that OpenAI used YouTube to train its Sora video-generating model. Bryant said Google takes “technical and legal measures” to prevent such unauthorized use “when we have a clear legal or technical basis to do so.”
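For reference, robots.txt is a plain-text file that tells crawlers which paths a site allows them to fetch; honoring it is a convention, not a technical barrier. Checking it takes a few lines of Python’s standard library; the bot name and video URL below are placeholders.

```python
# Check what a site's robots.txt permits, using only the standard library.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.youtube.com/robots.txt")
rp.read()

# can_fetch() returns False for paths the file disallows for this agent.
print(rp.can_fetch("ExampleBot", "https://www.youtube.com/watch?v=example"))
```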

Google also gathered transcripts from YouTube, according to the Times’ sources. Bryant said that the company has trained its models “on some YouTube content, in accordance with our agreements with YouTube creators.”

The Times writes that Google’s legal department asked the company’s privacy team to tweak its policy language to expand what it could do with consumer data, including content from office tools like Google Docs. The new policy was reportedly released on July 1st intentionally, to take advantage of the distraction of the Independence Day holiday weekend.

Meta likewise bumped up against the limits of good training data availability, and in recordings the Times heard, its AI team discussed its unpermitted use of copyrighted works while working to catch up to OpenAI. After going through “almost every available English-language book, essay, poem and news article on the internet,” the company apparently considered steps like paying for book licenses or even buying a large publisher outright. It was also apparently limited in the ways it could use consumer data by privacy-focused changes it made in the wake of the Cambridge Analytica scandal.

Google, OpenAI, and the broader AI training world are wrestling with quickly evaporating training data for their models, which get better the more data they absorb. The Journal wrote this week that companies’ demand for data may outpace the supply of new content by 2028.

Possible solutions to that problem mentioned by the Journal on Monday include training models on “synthetic” data created by their own models, or so-called “curriculum learning,” which involves feeding models high-quality data in an ordered fashion in hopes that they can make “smarter connections between concepts” using far less information. Neither approach is proven yet. The companies’ other option is using whatever they can find, whether they have permission or not, and based on multiple lawsuits filed in the last year or so, that way is, let’s say, more than a little fraught.
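To make the curriculum idea concrete, here is a toy sketch: score each example for quality and present the cleanest text first. The scoring heuristic is a stand-in of my own, not anything the Journal describes; real systems might rank by perplexity, source reputation, or human ratings.

```python
# Toy sketch of curriculum learning: order training examples so the
# model sees high-quality data first. The heuristic is a placeholder.
def quality_score(example: str) -> float:
    # Hypothetical heuristic: longer, well-formed text scores higher.
    return min(len(example.split()) / 50.0, 1.0)


def curriculum_order(examples: list[str]) -> list[str]:
    """Sort the corpus so training sees the highest-quality text first."""
    return sorted(examples, key=quality_score, reverse=True)


corpus = ["shrt txt", "A longer, carefully edited paragraph of clean prose."]
for example in curriculum_order(corpus):
    pass  # each example would be tokenized and fed to the training loop here
```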
