X will soon close its longtime San Francisco office and move employees to offices elsewhere in the Bay Area, according to an email from CEO Linda Yaccarino reported by The New York Times. Yaccarino’s note to employees comes several weeks after Elon Musk threatened to move X’s headquarters out of California and into Austin, Texas.
Yaccarino’s note, however, doesn’t seem to mention Texas. According to The New York Times, she told employees the closure will happen over the “next few weeks” and that employees will work out of “a shared engineering space in Palo Alto” that’s also used by xAI, as well as other “locations in San Jose.”
Twitter, and now X, has had a rocky relationship with its home base since Musk’s takeover of the company. Musk banned employees from working remotely shortly after taking over the company in 2022, and ordered many Twitter workers back to the office in the mid-Market neighborhood of San Francisco.
He later ran afoul of the city’s Department of Building Inspection for installing a giant flashing X on top of the building, and for reportedly converting office space into hotel rooms for employees to sleep in. The company’s landlord had also sued X over unpaid rent, The San Francisco Chronicle reported earlier this year. The lawsuit was later dismissed.
Despite Musk’s frequent complaints about San Francisco and its elected leaders, he had previously vowed to keep the company’s headquarters in the city. “Many have offered rich incentives for X (fka Twitter) to move its HQ out of San Francisco,” Musk tweeted last year.
“Moreover, the city is in a doom spiral with one company after another left or leaving. Therefore, they expect X will move too. We will not. You only know who your real friends are when the chips are down. San Francisco, beautiful San Francisco, though others forsake you, we will always be your friend.”
But, even before Musk’s recent posts about moving to Austin, there were other signs X may be getting ready to leave after all. The San Francisco Chronicle reported in July that X’s landlord was looking to sublease much of the company’s 800,000 square-foot headquarters.
X didn’t immediately respond to a request for comment.
This article originally appeared on Engadget at https://www.engadget.com/social-media/x-is-reportedly-closing-its-san-francisco-office-203650428.html?src=rss
OpenAI will give the US AI Safety Institute early access to its next model as part of its safety efforts, Sam Altman has revealed in a tweet. Apparently, the company has been working with the consortium "to push forward the science of AI evaluations." The National Institute of Standards and Technology (NIST) formally established the Artificial Intelligence Safety Institute earlier this year, though Vice President Kamala Harris announced it back in 2023 at the UK AI Safety Summit. Based on the NIST's description of the consortium, it's meant "to develop science-based and empirically backed guidelines and standards for AI measurement and policy, laying the foundation for AI safety across the world."
The company, along with DeepMind, similarly pledged to share AI models with the UK government last year. As TechCrunch notes, there have been growing concerns that OpenAI is making safety less of a priority as it seeks to develop more powerful AI models. There was speculation that the board decided to kick Sam Altman out of the company (he was very quickly reinstated) due to safety and security concerns. However, the company told staff members in an internal memo at the time that it was because of "a breakdown in communication."
In May this year, OpenAI admitted that it disbanded the Superalignment team it created to ensure that humanity remains safe as the company advances its work on generative artificial intelligence. Before that, OpenAI co-founder and Chief Scientist Ilya Sutskever, who was one of the team's leaders, left the company. Jan Leike, who was also one of the team's leaders, quit, as well. He said in a series of tweets that he had been disagreeing with OpenAI's leadership about the core priorities of the company for quite some time and that "safety culture and processes have taken a backseat to shiny products." OpenAI created a new safety group by the end of May, but it's led by board members that include Altman, prompting concerns about self-policing.
a few quick updates about safety at openai:
as we said last july, we’re committed to allocating at least 20% of the computing resources to safety efforts across the entire company.
our team has been working with the US AI Safety Institute on an agreement where we would provide…
This article originally appeared on Engadget at https://www.engadget.com/openai-vows-to-provide-the-us-government-early-access-to-its-next-ai-model-110017697.html?src=rss
TechCrunch reported that a group of researchers from KU Leuven university in Belgium identified six popular dating apps that malicious users can use to pinpoint the near-exact location of other users. Dating apps including Hinge, Happn, Bumble, Grindr, Badoo and Hily were all vulnerable to some form of “trilateration” that could expose users’ approximate locations, which prompted some of the apps to take action and tighten their security, according to the published paper.
The term “trilateration” refers to a three-point measurement used in GPS to determine the relative distance to a target. The six named apps each fell into one of three categories of trilateration: “exact distance trilateration,” in which a target can be located to within “at least a 111m by 111m square (at the equator)”; “rounded distance trilateration”; or “oracle trilateration,” in which distance filters are used to approximate a rounded area much like a Venn diagram.
Grindr is “susceptible to exact distance trilateration,” while Happn falls under “rounded distance trilateration.” The remaining four fall under “oracle trilateration,” despite the fact that Hinge and Hily hide the distances of their users, according to the paper.
Karel Dhondt, one of the researchers involved in the study, told TechCrunch that a user with malicious intent could locate another user to within “2 meters” using oracle trilateration. The method involves the attacker moving to a rough estimate of the victim's location based on their profile, then stepping in increments until the victim is no longer in proximity, repeating this from three different positions and triangulating the data to one spot.
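The geometry behind that final triangulation step can be sketched in a few lines of code. The snippet below is an illustrative example, not the researchers' actual tooling: it assumes an attacker has already recovered one distance reading from each of three known positions, and it solves the resulting circle equations for the target's coordinates.

```python
import numpy as np

def trilaterate(anchors, radii):
    """Estimate a target's (x, y) position from three anchor points and
    the measured distance from each one.

    Each anchor i gives a circle (x - xi)^2 + (y - yi)^2 = ri^2.
    Subtracting anchor 0's equation from the other two cancels the
    quadratic terms, leaving a linear system A @ p = b that we solve
    by least squares (which also tolerates slightly noisy radii)."""
    (x0, y0), (x1, y1), (x2, y2) = anchors
    r0, r1, r2 = radii
    A = np.array([
        [2 * (x1 - x0), 2 * (y1 - y0)],
        [2 * (x2 - x0), 2 * (y2 - y0)],
    ])
    b = np.array([
        r0**2 - r1**2 + x1**2 - x0**2 + y1**2 - y0**2,
        r0**2 - r2**2 + x2**2 - x0**2 + y2**2 - y0**2,
    ])
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# Hypothetical target at (3, 4); attacker measures distances from three spots.
anchors = [(0, 0), (10, 0), (0, 10)]
target = np.array([3.0, 4.0])
radii = [np.hypot(*(target - np.array(a))) for a in anchors]
print(trilaterate(anchors, radii))  # prints approximately [3. 4.]
```

In the oracle variant described above, each radius comes not from the app directly but from the point at which a proximity filter flips from "nearby" to "not nearby," which is why apps that hide exact distances were still affected.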
Bumble’s vice president of global communication Gabrielle Ferree told the website that they “swiftly resolved the issues outlined” with its distance filter last year. Hily co-founder and chief technology officer Dmytro Kononov said in a statement that an investigation revealed “a potential possibility for trilateration” but “exploiting this for attacks was impossible.”
Happn chief executive officer and president Karima Ben Adelmalek told TechCrunch that the company discussed trilateration with the Belgian researchers. She said that an additional layer of protection designed to prevent trilateration “was not taken into account in their analysis.”
Grindr’s chief privacy officer Kelly Peterson Miranda noted that users can disable their distance display from their profile. She also noted that “Grindr users are in control of what location information they provide.” Hinge did not respond with a comment.
Other dating apps have taken extra steps to ensure their users are speaking to actual people and not spam bots or fake accounts. In February, Tinder started requiring users in the US, UK, Brazil and Mexico to upload a copy of an official driver’s license or passport along with a video selfie as part of a new advanced ID verification system.
Update, July 31, 7:55PM ET: The story was updated to remove the statement that Badoo did not respond to a request for comment. As Badoo is owned by Bumble, Bumble VP Gabrielle Ferree's statement covers both brands.
This article originally appeared on Engadget at https://www.engadget.com/belgian-researchers-found-a-huge-privacy-hole-in-six-dating-apps-223227855.html?src=rss
As with any heavily marketed product, the claims around virtual private networks (VPNs) can be fishy. Phrases like “military-grade encryption” or “total anonymity” aren’t verifiable, and certainly won’t help you decide which service suits your browsing needs best. As more of these companies embrace influencer marketing to sell their products, the obscure lingo has only grown, making it a confusing field to navigate, despite VPNs’ importance for online security. We tested nine popular VPNs to demystify the market and help you figure out which are the best VPNs available today.
VPNs, or virtual private networks, mask your IP address and the identity of your computer or mobile device on the network by creating an encrypted "tunnel" that prevents your internet service provider (ISP) from accessing data about your browsing history. VPNs are not a one-size-fits-all security solution, though.
Instead, they’re just one part of keeping your data private and secure. Roya Ensafi, assistant professor of computer science and engineering at the University of Michigan, told Engadget that VPNs don’t protect against common threats like phishing attacks, nor do they protect your data from being stolen. Much of the data or information is stored with the VPN provider instead of your ISP, which means that using a poorly designed or unprotected network can still undermine your security. But they do come in handy for online privacy when you’re connecting to an untrusted network somewhere public because they tunnel and encrypt your traffic to the next hop.
That means sweeping claims that seem promising, like military-grade encryption or total digital invisibility, may not be totally accurate. Instead, Yael Grauer, program manager of Consumer Reports’ online security guide, recommends looking for security features like open-source software with reproducible builds, up-to-date support for industry-standard protocols like WireGuard (CR's preferred protocol) or IPsec, and the ability to defend against attack vectors like brute force.
Understanding VPNs and your needs
Before considering a VPN, make sure your online security is up to date in other ways. That means complex passwords, multi-factor authentication methods and locking down your data sharing preferences. Even then, you probably don’t need to be using a VPN all the time.
“If you're just worried about somebody sitting there passively and looking at your data then a VPN is great,” Jed Crandall, an associate professor at Arizona State University, told Engadget.
That brings us to some of the most common use cases for VPNs. If you use public WiFi networks a lot, like while working at a coffee shop, then a VPN can help give you private internet access. They’re also helpful for hiding your activity from others who share your internet connection, if you don’t want members of your household to know what you’re up to online.
Bypassing geoblocks has also become a popular use case, since a VPN can help you reach services in other parts of the world. For example, you can access shows that are only available on streaming services, like Netflix, Hulu or Amazon Prime, in other countries, or play online games with people located all over the globe.
There are also a few common VPN features that you should consider before deciding if you want to use one, and which is best for you:
What is split tunneling?
Split tunneling allows you to route some traffic through your VPN, while other traffic has direct access to the internet. This can come in handy when you want to protect certain activity online without losing access to local network devices, or services that work best with location sharing enabled.
What is a double VPN?
A double VPN, otherwise known as a multi-hop VPN or a VPN chain, passes your online activity through two different VPN servers, one right after the other. For VPN services that support this, users are typically able to choose which two servers they want their traffic to pass through. As you might expect, this provides an extra layer of security.
Are VPNs worth it?
Whether or not a VPN is worth it depends on how often you’d use it for the above use cases. If you travel a lot and rely on public WiFi or hotspots, are looking to browse outside of your home country or want to keep your traffic hidden from your ISP, then investing in a VPN will be useful. But keep in mind that even the best VPN services often slow down your internet connection speed, so they may not be ideal all the time.
We recommend not relying on a VPN as your main cybersecurity tool. It can provide a false sense of security, leaving you vulnerable to attack. And if you choose a VPN at random, it may be less secure than simply relying on your ISP. That’s because the VPN could be based in a country with weaker data privacy regulation, obligated to hand information over to law enforcement or linked to weak user data protection policies.
For VPN users working in professions like activism or journalism who want to really strengthen their internet security, options like the Tor browser may be a worthwhile alternative, according to Crandall. Tor is free, and while it's less user-friendly, it’s built for anonymity and privacy.
How we tested VPNs
To test the security specs of different VPNs and name our top picks, we relied on pre-existing academic work through Consumer Reports, VPNalyzer and other sources. We referenced privacy policies, transparency reports and security audits made available to the public. We also considered past security incidents like data breaches.
We looked at price, usage limits, effects on internet speed, possible use cases, ease of use, general functionality and additional “extra” VPN features like multihop. The VPNs were tested across iOS, Android and Mac devices so we could see the state of the mobile apps across various platforms (Windows devices are also supported in most cases). We used the “quick connect” feature on the VPN apps to connect to the “fastest” provider available when testing internet speed, access to IP address data, and DNS and WebRTC leaks (when a fault in the encrypted tunnel reveals requests to an ISP).
Otherwise, we conducted a test of geoblocking content by accessing Canada-exclusive Netflix releases, a streaming test by watching a news livestream on YouTube via a Hong Kong-based VPN and a gaming test by playing on servers in the United Kingdom. Performing these tests at the same time also allowed us to test claims about simultaneous device use. Here are the VPN services we tested:
NordVPN
NordVPN didn’t quite make the cut because it’s overhyped and underwhelming. As I've written in our full review of NordVPN, the pricing, up to $14.49 for a “complete” subscription, seemed high compared to other services, and its free or lower cost plans just didn’t have the same wide variety of features as its competitors.
TunnelBear
Despite the cute graphics and user friendliness, TunnelBear wasn’t a top choice. It failed numerous basic security tests from Consumer Reports, and had limited availability across platforms like Linux. It did, however, get a major security boost in July when it updated to support WireGuard protocol across more of its platforms.
Bitdefender VPN
Bitdefender doesn’t offer support for devices like routers, which limits its cross-platform accessibility. It also lacked a transparency report or third-party audit to confirm security specs.
Atlas VPN
Atlas ranked lower on our speed tests compared to the other VPNs we tried, and was noticeably slower on web browsing and streaming tests. It was a good option otherwise, but could easily cause headaches for those chasing high speed connections. Security-wise, an Atlas VPN vulnerability leaked Linux users’ real IP addresses.
VPN FAQs
What are some things VPNs are used for?
VPNs are traditionally used to protect your internet traffic. If you’re connected to an untrusted network, like public WiFi in a cafe, using a VPN hides what you do from the internet service provider. That way, neither the owner of the WiFi nor hackers trying to get into the system can see the identity of your computer or your browsing history.
A common non-textbook use case for VPNs has been accessing geographically restricted content. VPNs can mask your location, so even if you’re based in the United States, they can make it appear as if you’re browsing abroad and unblock access. This is especially useful for streaming content that’s often limited to certain countries, like if you want to watch Canadian Netflix from the US.
What information does a VPN hide?
A VPN doesn’t hide all of your data. It only hides information like your IP address, location and browser history. A common misconception is that VPNs can make you totally invisible online. But keep in mind that the VPN provider often still has access to all of this information, so it doesn’t grant you total anonymity. You’re also still vulnerable to phishing attacks, hacking and other cyberthreats that you should be mindful of by implementing strong passwords and multi-factor authentication.
Are VPNs safe?
Generally, yes. VPNs are a safe and reliable way to encrypt and protect your internet data. But like most online services, the safety specifics vary from provider to provider. You can use resources like third-party audits, Consumer Reports reviews, transparency reports and privacy policies to understand the specifics of your chosen provider.
What about Google’s One VPN?
Google One subscriptions include access to the company’s VPN, which works similarly to other VPNs on our list, hiding your online activity from network operators. However, Google announced recently that it plans to shut down the One VPN because "people simply weren’t using it." There's no specific date for the shutdown, with Google simply saying it will discontinue the service sometime later in 2024. Pixel phone owners, however, will continue to have access to the free VPN available on their devices.
Recent updates
June 2024: Updated to include table of contents.
November 2023: This story was updated after publishing to remove mention of PPTP, a protocol that Consumer Reports' Yael Grauer notes "has serious security flaws."
This article originally appeared on Engadget at https://www.engadget.com/best-vpn-130004396.html?src=rss
We've known for a while that Amazon is planning to soup up Alexa with generative AI features. While the company says it has been integrating that into various aspects of the voice assistant, it's also working on a more advanced version of Alexa that it plans to charge users to access. Amazon has reportedly dubbed the higher tier "Remarkable Alexa" (let's hope it doesn't stick with that name for the public rollout).
According to Reuters, Amazon is still determining pricing and a release date for Remarkable Alexa, but it has mooted a fee of between roughly $5 and $10 per month for consumers to use it. Amazon is also said to have been urging its workers to have Remarkable Alexa ready by August — perhaps so it's able to discuss the details at its usual fall Alexa and devices event.
This will mark the first major revamp of Alexa since Amazon debuted the voice assistant alongside Echo speakers a decade ago. The company is now in a position where it's trying to catch up with the likes of ChatGPT and Google Gemini. Amazon CEO Andy Jassy, who pledged that the company was working on a “more intelligent and capable Alexa" in an April letter to shareholders, has reportedly taken a personal interest in the overhaul. Jassy noted last August that every Amazon division had generative AI projects in the pipeline.
"We have already integrated generative AI into different components of Alexa, and are working hard on implementation at scale — in the over half a billion ambient, Alexa-enabled devices already in homes around the world — to enable even more proactive, personal, and trusted assistance for our customers," an Amazon spokeswoman told Reuters. However, the company has yet to deploy the more natural-sounding and conversational version of Alexa it showed off last September.
Remarkable Alexa is said to be capable of handling complex prompts, such as composing and sending an email and ordering dinner, all from a single command. Deeper personalization is another aspect, and Amazon reportedly expects that consumers will use it for shopping advice, as with its Rufus assistant.
Upgraded home automation capability is said to be a priority too. According to the report, Remarkable Alexa may be able to gain a deeper understanding of user preferences, so it might learn to turn on the TV to a specific show. It may also learn to turn on the coffee machine when your alarm clock goes off (though it's already very easy to set this up through existing smart home systems).
Alexa has long been an unprofitable endeavor for Amazon — late last year, it laid off several hundred people who were working on the voice assistant. It's not a huge surprise that the company would try to generate more revenue from Remarkable Alexa (which, it's claimed, won't be offered as a Prime benefit). Users might need to buy new devices with more powerful tech inside so that Remarkable Alexa can run on them properly.
In any case, $10 (or even $5) per month for an upgraded voice assistant seems like a hard sell, especially when the current free version of Alexa can already handle a wide array of tasks.
This article originally appeared on Engadget at https://www.engadget.com/amazon-reportedly-thinks-people-will-pay-up-to-10-per-month-for-next-gen-alexa-152205672.html?src=rss
Twitch signed up cyberbullying experts, web researchers and community members back in 2020 to form the Safety Advisory Council. The review board was meant to help the company draft new policies, develop products that improve safety and protect the interests of marginalized groups. Now, CNBC reports that the streaming website has terminated all the members of the council. Twitch reportedly called the nine members into a meeting on May 6 to let them know that their existing contracts would end on May 31 and that they would not be getting paid for the second half of 2024.
The Safety Advisory Council's members include Dr. Sameer Hinduja, co-director of the Cyber Bullying Research Center, and Dr. T.L. Taylor, the co-founder and director of AnyKey, an organization that advocates for inclusion and diversity in video games and esports. There's also Emma Llansó, the director of the Free Expression Project for the Center for Democracy and Technology.
In an email sent to the members, Twitch reportedly told them that going forward, "the Safety Advisory Council will primarily be made up of individuals who serve as Twitch Ambassadors." The Amazon subsidiary didn't mention any names, but it describes its Ambassadors as people who "positively contribute to the Twitch community — from being role models for their community, to establishing new content genres, to having inspirational stories that empower those around them."
In a statement sent to The Verge, Twitch trust and safety communications manager Elizabeth Busby said that the new council members will "offer [the website] fresh, diverse perspectives" after working with the same core members for years. "We’re excited to work with our global Twitch Ambassadors, all of whom are active on Twitch, know our safety work first hand, and have a range of experiences to pull from," Busby added.
It's unclear if the Ambassadors taking the current council members' place will get paid or if they're expected to lend their help to the company for free. If it's the latter, then this development could be a cost-cutting measure: The outgoing members were paid between $10,000 and $20,000 a year, CNBC says. Back in January, Twitch also laid off 35 percent of its workforce to "cut costs" and to "build a more sustainable business." In the same month, it reduced how much streamers make from every Twitch Prime subscription they generate, as well.
This article originally appeared on Engadget at https://www.engadget.com/twitch-removes-every-member-of-its-safety-advisory-council-131501219.html?src=rss
OpenAI said that it stopped five covert influence operations that used its AI models for deceptive activities across the internet. These operations, which OpenAI shut down between 2023 and 2024, originated from Russia, China, Iran and Israel and attempted to manipulate public opinion and influence political outcomes without revealing their true identities or intentions, the company said on Thursday. “As of May 2024, these campaigns do not appear to have meaningfully increased their audience engagement or reach as a result of our services,” OpenAI said in a report about the operation, and added that it worked with people across the tech industry, civil society and governments to cut off these bad actors.
OpenAI’s report comes amid concerns about the impact of generative AI on the multiple elections around the world slated for this year, including in the US. In its findings, OpenAI revealed how networks of people engaged in influence operations have used generative AI to produce text and images at much higher volumes than before, and to fake engagement by using AI to generate comments on social media posts.
“Over the last year and a half there have been a lot of questions around what might happen if influence operations use generative AI,” Ben Nimmo, principal investigator on OpenAI’s Intelligence and Investigations team, told members of the media in a press briefing, according to Bloomberg. “With this report, we really want to start filling in some of the blanks.”
OpenAI said that the Russian operation called “Doppelganger” used the company’s models to generate headlines, convert news articles to Facebook posts, and create comments in multiple languages to undermine support for Ukraine. Another Russian group used OpenAI's models to debug code for a Telegram bot that posted short political comments in English and Russian, targeting Ukraine, Moldova, the US and the Baltic states. The Chinese network "Spamouflage," known for its influence efforts across Facebook and Instagram, utilized OpenAI's models to research social media activity and generate text-based content in multiple languages across various platforms. The Iranian "International Union of Virtual Media" also used AI to generate content in multiple languages.
OpenAI’s disclosure is similar to the ones that other tech companies make from time to time. On Wednesday, for instance, Meta released its latest report on coordinated inauthentic behavior detailing how an Israeli marketing firm had used fake Facebook accounts to run an influence campaign on its platform that targeted people in the US and Canada.
This article originally appeared on Engadget at https://www.engadget.com/openai-says-it-stopped-multiple-covert-influence-operations-that-abused-its-ai-models-225115466.html?src=rss
OpenAI and Reddit announced a partnership on Thursday that will allow OpenAI to surface Reddit discussions in ChatGPT and for Reddit to bring AI-powered features to its users. The partnership will “enable OpenAI’s tools to better understand and showcase Reddit content, especially on recent topics,” both companies said in a joint statement. As part of the agreement, OpenAI will also become an advertising partner on Reddit, which means that it will run ads on the platform.
The deal is similar to the one that Reddit signed with Google in February, and which is reportedly worth $60 million. A Reddit spokesperson declined to disclose the terms of the OpenAI deal to Engadget and OpenAI did not respond to a request for comment.
OpenAI has been increasingly striking partnerships with publishers to get data to continue training its AI models. In the last few weeks alone, the company has signed deals with the Financial Times and Dotdash Meredith. Last year, it also partnered with German publisher Axel Springer to train its models on news from Politico and Business Insider in the US and Bild and Die Welt in Germany.
Under the new arrangement, OpenAI will get access to Reddit’s Data API, which, the company said, will provide it with “real time, structured, and unique content from Reddit.” It’s not clear what AI-powered features Reddit will build into its platform as a result of the partnership. A Reddit spokesperson declined to comment.
Last year, getting access to Reddit’s data, a rich source of real time, human generated, and often high-quality information, became a contentious issue after the company announced that it would start charging developers to use its API. As a result, dozens of third-party Reddit clients were forced to shut down and thousands of subreddits went dark in protest. At the time, Reddit stood its ground and said that large AI companies were scraping its data with no payment. Since then, Reddit has been monetizing its data by striking such deals with Google and OpenAI, whose progress in training their AI models depends on having access to it.
This article originally appeared on Engadget at https://www.engadget.com/openai-strikes-deal-to-put-reddit-posts-in-chatgpt-224133045.html?src=rss
YouTube said it would comply with an order blocking access to videos of Hong Kong’s protest anthem inside the region, according to The Guardian. The platform’s decision comes after an appeals court banned the protest song “Glory to Hong Kong,” which the largely China-controlled government (predictably) framed as a national security threat.
Alphabet, YouTube and Google’s parent company, followed its familiar playbook of legally complying with court orders undermining human rights while issuing statements puffing up its advocacy for them. “We are disappointed by the Court’s decision but are complying with its removal order,” YouTube’s statement to The Guardian said. “We’ll continue to consider our options for an appeal, to promote access to information.”
Alphabet reportedly told the outlet the block would take effect immediately inside the region. It added that it shares the concerns of human rights groups that it could deal a blow to online freedoms.
YouTube reportedly said links to the videos will eventually no longer be visible in Google Search inside Hong Kong. I tried using a Hong Kong-based VPN server while in the US, and the videos were still viewable on Thursday morning. However, The Guardian said attempts to view it from inside the region show the message, “This content is not available on this country domain due to a court order.”
This article originally appeared on Engadget at https://www.engadget.com/youtube-reportedly-agrees-to-block-videos-of-hong-kongs-protest-song-inside-the-region-174245129.html?src=rss
Chinese tech giant Huawei has been secretly funding research in America despite being blacklisted, as reported by Bloomberg. The cutting-edge research is happening at universities, including Harvard, and the money is being funneled through an independent Washington-based research foundation, along with a competition for scientists.
Bloomberg found that Huawei was the sole funder of a research competition that has awarded millions of dollars since 2022 and attracted hundreds of proposals from scientists. Some of these scientists are at top US universities that have banned researchers from working with the company.
What’s the big deal? The fear is that this research could lead to innovations that give China a leg up with regard to both defense contracting and commercial interests, according to Kevin Wolf, a partner at the business-focused law firm Akin who specializes in export controls. Optica, the foundation behind all of this, has posted online that it is interested in “high-sensitivity optical sensors and detectors," among other categories of research.
“It’s a bad look for a prestigious research foundation to be anonymously accepting money from a Chinese company that raises so many national security concerns for the US government,” said James Mulvenon, a defense contractor who has worked on research security issues and co-authored several books on industrial espionage.
It’s worth noting that this money funneling operation doesn’t look to be illegal, as research intended for publication doesn’t fall under the purview of the ban. Huawei operates similar competitions in other parts of the world, though openly. People who participated in the US-based research competition didn’t even know that Huawei was involved, believing the money to come from Optica. The competition awards $1 million per year and Optica didn’t give any indication that Huawei was supplying the cash.
A Huawei spokesperson told Bloomberg that the company and the Optica Foundation created the competition to support global research and promote academic communication, saying that it remained anonymous to keep from being seen as a promotion of some kind. Optica’s CEO, Liz Rogan, said in a statement that many foundation donors “prefer to remain anonymous” and that “there is nothing unusual about this practice.” She also said that the entire board knew about Huawei’s involvement and that everyone signed off on it. Bloomberg did note that the Huawei-backed competition was the only one on Optica’s website that didn’t list individual and corporate financial sponsors.
Some expected President Biden to reverse Trump’s executive order when it expired in 2021, but he headed in the opposite direction. Not only does the order stand, but Biden signed a law that blocked Huawei from obtaining an FCC license and he banned American investments in China’s high tech industries. We aren’t cozying up to China anytime soon, so Huawei will continue to be persona non grata on this side of the pond (the company still does booming business in Europe).
This article originally appeared on Engadget at https://www.engadget.com/huawei-has-been-secretly-funding-research-in-america-after-being-blacklisted-182020402.html?src=rss
Proton Mail has introduced Dark Web Monitoring for its paid users, which will keep them informed of breaches or leaks they may have been affected by. If anything’s been spotted on the dark web, the feature will send out alerts that include information like what service was compromised, what personal details the attackers got (passwords, name, etc.) and recommended next steps. At launch, you’ll have to visit the Proton Mail Security Center on the web or desktop to access these alerts, but the company says email and in-app notifications are on the way.
Dark Web Monitoring is intended to be a proactive security measure. If you’ve used your Proton Mail email address to sign up for a third-party service, like a social media site, and then hackers steal user data from that service, it would let you know in a timely manner if your credentials have been compromised so you can take action (hopefully) before any harm is done. It seems a fitting move for the service, which already offers end-to-end encryption and has made privacy its main stance since the beginning. Dark Web Monitoring won’t be available to free users, though.
“While data breaches of third-party sites leading to the leak of personal information (such as your email address) can never be entirely avoided, automated early warning can help users stay vigilant and mitigate worse side effects such as identity theft,” said Eamonn Maguire, Head of Anti-Abuse and Account Security at Proton.
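Conceptually, a feature like this boils down to matching a user's address against known breach records and surfacing what was exposed. Here's a minimal sketch of that matching logic in Python — the breach data, field names and alert wording are entirely hypothetical, since Proton hasn't published how its monitoring works internally:

```python
from dataclasses import dataclass, field

@dataclass
class BreachRecord:
    service: str                      # which third-party site was compromised
    exposed_fields: list              # e.g. ["email", "password"]
    affected_emails: set = field(default_factory=set)

def alerts_for(email: str, breaches: list) -> list:
    """Return a human-readable alert for each breach involving `email`."""
    found = []
    for b in breaches:
        if email.lower() in b.affected_emails:
            found.append(
                f"Alert: {b.service} was breached; exposed: "
                f"{', '.join(b.exposed_fields)}. "
                "Recommended: change your password for this service."
            )
    return found

# Hypothetical breach data, for illustration only
breaches = [
    BreachRecord("examplesocial.com", ["email", "password"], {"user@proton.me"}),
    BreachRecord("shopdemo.example", ["email", "name"], {"other@proton.me"}),
]

for msg in alerts_for("user@proton.me", breaches):
    print(msg)
```

The real service presumably sources breach dumps continuously and pushes alerts rather than being polled, but the core lookup-and-notify shape would be similar.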
This article originally appeared on Engadget at https://www.engadget.com/proton-mails-paid-users-will-now-get-alerts-if-their-info-has-been-posted-on-the-dark-web-100057504.html?src=rss
The European Union doesn't think you should have to choose between giving Meta and other major players your data or your money. In a statement, the European Data Protection Board (EDPB) said that "consent or pay" models often don't "comply with the requirements for valid consent" when a person must choose between providing their data for behavioral advertising purposes or paying for privacy.
The EDPB argues that only offering a paid alternative to data collection shouldn't be the default for large online platforms. It doesn't issue a mandate but stresses that these platforms should "give significant consideration" to providing a free option that doesn't involve data processing (or at least not as much). "Controllers should take care at all times to avoid transforming the fundamental right to data protection into a feature that individuals have to pay to enjoy," EDPB Chair Anu Talus said. "Individuals should be made fully aware of the value and the consequences of their choices."
Currently, Meta's EU users must pay €10 ($11) monthly for an ad-free subscription or consent to sharing their data. The EU is already investigating whether this system complies with the Digital Markets Act, which went into effect at the beginning of March.
This article originally appeared on Engadget at https://www.engadget.com/eu-criticizes-metas-privacy-for-cash-business-model-103042528.html?src=rss
Google has fired a Cloud engineer who interrupted Barak Regev, the managing director of its business in Israel, during a speech at an Israeli tech event in New York, according to CNBC. "I'm a Google software engineer and I refuse to build technology that powers genocide or surveillance!" the engineer was seen and heard shouting in a video captured by freelance journalist Caroline Haskins that went viral online. While being dragged away by security — and amidst jeers from the audience — he continued talking and referenced Project Nimbus. That's the $1.2 billion contract Google and Amazon had won to supply AI and other advanced technologies to the Israeli military.
Last year, a group of Google employees published an open letter urging the company to cancel Project Nimbus, in addition to calling out the "hate, abuse and retaliation" Arab, Muslim and Palestinian workers are getting within the company. "Project Nimbus puts Palestinian community members in danger! I refuse to build technology that is gonna be used for cloud apartheid," the engineer said. After he was removed from the venue, Regev told the audience that "[p]art of the privilege of working in a company, which represents democratic values is giving the stage for different opinions." He ended his speech after a second protester interrupted and accused Google of being complicit in genocide.
The incident took place during the MindTheTech conference in New York. Its theme for the year was apparently "Stand With Israeli Tech," because investments in Israel slowed down after the October 7 Hamas attacks. Haskins wrote a detailed account of what she witnessed at the event, but she wasn't able to stay until it wrapped up, because she was also thrown out by security.
The Google engineer who interrupted the event told Haskins that he wanted "other Google Cloud engineers to know that this is what engineering looks like — is standing in solidarity with the communities affected by your work." He spoke to the journalist anonymously to avoid professional repercussions, but Google clearly found out who he was. A Google spokesperson told CNBC that he was fired for "interfering with an official company-sponsored event." They also told the news organization that his "behavior is not okay, regardless of the issue" and that the "employee was terminated for violating [Google's] policies."
This article originally appeared on Engadget at https://www.engadget.com/google-fires-engineer-who-protested-at-a-company-sponsored-israeli-tech-conference-090430890.html?src=rss
Russian hackers keep trying to infiltrate Microsoft, the company revealed in a blog post. These hacks follow a similar incident from November of last year, in which state-sponsored agents obtained the emails of Microsoft’s senior level managers. An internal investigation led by Microsoft identified the hackers in both instances as a Russian group called Midnight Blizzard.
It looks like Midnight Blizzard has gotten bolder in its approach. Last year’s attack seemed to prioritize the collection of email addresses, but this most recent attack finds the group repeatedly attempting to breach the company’s systems and gain access to source code. Microsoft has filed an incident report with the U.S. Securities and Exchange Commission.
We don’t know exactly what these hackers want, but Microsoft said they are likely using email addresses acquired during November’s attack to help gain access to internal systems. Midnight Blizzard “may be using the information it has obtained to accumulate a picture of areas to attack and enhance its ability to do so,” the company wrote. I know one thing. They had better leave Clippy alone.
Midnight Blizzard is believed to work directly for Russia’s Foreign Intelligence Service (SVR) and is said to operate at the behest of Vladimir Putin. The group is likely behind 2016’s hack of the Democratic National Committee and 2020’s hack of the software company SolarWinds, which led to a breach of government networks.
This article originally appeared on Engadget at https://www.engadget.com/russian-state-sponsored-hackers-keep-trying-to-infiltrate-microsoft-162706062.html?src=rss
It’s 1995, and I’m trying to watch a video on the internet. I entered the longest, most complex URL I’d ever seen into AOL’s web browser to view a trailer for Paul W.S. Anderson’s long-awaited film adaptation of Mortal Kombat. I found it in an issue of Electronic Gaming Monthly, tucked away in the bottom of a full-page ad for the film. Online marketing at the time was such an afterthought, studios didn’t even bother grabbing short and memorable web addresses for their major releases, let alone dedicated websites. (Star Trek Generations and Stargate were among the few early exceptions.)
After the interminable process of transcribing the URL from print, I gathered my family around our Packard Bell PC (powered by an Intel 486 DX and, let’s say, 8MB of RAM), hit return and waited as the video slowly came down our 33.6kbps dial-up connection. And waited. It took 25 minutes for it to fully load. After corralling my family once again, I hit play and was treated to a horrendously compressed, low-resolution version of the trailer I’d been dreaming about for months. It was unwatchable. The audio was shit. But that was the moment I became obsessed with online video.
I imagined a futuristic world beyond my boxy CRT set and limited cable TV subscription. A time after VHS tapes when I could just type in a URL and enjoy a show or movie while eating one of those rehydrated Pizza Hut pies from Back to the Future 2. The internet would make it so.
Looking back now, almost 30 years later, and 20 years after Engadget sprung to life, I realize my 11-year-old self was spot on. The rise of online video transformed the internet from a place where we’d browse the web, update our LiveJournals, steal music and chat with friends on AIM to a place where we could also just sit back and relax. For Millennials, it quickly made our computer screens more important than our TVs. What I didn’t expect, though, was that streaming video would also completely upend Hollywood and the entire entertainment industry.
If my experience with the Mortal Kombat trailer didn’t make it clear enough, video was a disaster on the internet in the ’90s. Most web surfers (as we were known at the time) were stuck with terribly slow modems and similarly unimpressive desktop systems. But really, the problem goes back to dealing with video on computers.
Apple’s QuickTime format made Macs the ideal platform for multimedia creators, and, together with its HyperCard software for creating interactive multimedia databases, it spawned the rise of Myst and the obsession with mixed-media educational software. PCs relied on MPEG-1, which debuted in 1993 and was mainly for VCDs and some digital TV providers. The problem with both formats was space: Hard drives were notoriously small and expensive at the time, which made CDs the main option for accessing any sort of video on your computer. If your computer only had a 500MB hard drive, a slim disc that could store 650MB seemed like magic.
But that also meant video had no place in the early internet. RealPlayer was the first true stab at delivering streaming video and audio online — and while it was better than waiting 20 minutes for a huge file to download, it was still hard to actually stream media when you were constrained by a dial-up modem. I remember seeing buffering alerts more than I did any actual RealPlayer content. It took the proliferation of broadband internet access and one special app from Adobe to make web video truly viable.
While we may curse its name today, it’s worth remembering how vital Macromedia Flash was to the web in the early 2000s. (We’ve been around long enough to cover Adobe’s acquisition of Macromedia in 2005!) Its support for vector graphics, stylized text and simple games injected new life into the internet, and it allowed just about anyone to create that content. HTML just wasn’t enough. Ask any teen or 20-something who was online at the time, and they could probably still recite most of The End of the World by heart.
With 2002’s Flash MX 6, Macromedia added support for Sorenson’s Spark video codec, which opened the floodgates for online video. (It was eventually replaced in 2005 by the VP6 codec from On2, a company Google acquired in 2009.) Macromedia’s video offering looked decent, loaded quickly and was supported on every browser that had the Flash plugin, making it the ideal player choice for video websites.
The adult entertainment industry latched onto Flash video first, as you’d expect. Porn sites also relied on the technology to lock down purchased videos and entice viewers to other sites with interactive ads. But it was YouTube (and, to a lesser extent, Vimeo) that truly showed mainstream users what was possible with video on the internet. After launching in February 2005, YouTube grew so quickly it was serving 100 million videos a day by July 2006, making up 60 percent of all online videos at the time. It’s no wonder Google rushed to acquire the company for $1.65 billion later that year (arguably the search giant’s smartest purchase ever).
After YouTube’s shockingly fast rise, it wasn’t too surprising to see Netflix announce its own Watch Now streaming service in 2007, which also relied on Flash for video. At $17.99 a month for 18 hours of video, with a library of only 1,000 titles, Netflix’s streaming offering didn’t seem like much of a threat to Blockbuster, premium cable channels or cinemas at first. But the company wisely expanded Watch Now to all Netflix subscribers in 2008 and removed any viewing cap: The Netflix binge was born.
It’s 2007, and I’m trying to watch a video on the internet. In my post-college apartment, I hooked up my desktop computer to an early-era (720p) Philips HDTV, and all of a sudden, I had access to thousands of movies, instantly viewable over a semi-decent cable connection. I didn’t need to worry about seeding torrents or compiling Usenet files (things I’d only heard about from dirty pirates, you see). I didn’t have to stress about any Blockbuster late fees. The movies were just sitting on my TV, waiting for me to watch them. It was the dream for digital media fanatics: Legal content available at the touch of a button. What a concept!
Little did I know then that the Watch Now concept would basically take over the world. Netflix initially wanted to create hardware to make the service more easily accessible, but it ended up spinning off that idea, and Roku was born. The company’s streaming push also spurred on the creation of Hulu, announced in late 2007 as a joint offering between NBCUniversal and News Corp. to bring their television shows online. Disney later joined, giving Hulu the full power of all the major broadcast TV networks. Instead of a stale library of older films, Hulu allowed you to watch new shows on the internet the day after they aired. Again, what a concept!
Amazon, it turns out, was actually earlier to the streaming party than Netflix. It launched the Amazon Unbox service in 2006, which was notable for letting you watch videos as they were being downloaded onto your computer. It was rebadged to Amazon Video On Demand in 2008 (a better name, which actually described what it did), and then it became Amazon Instant Video in 2011, when it was tied together with premium Prime memberships.
As the world of streaming video exploded, Flash’s reputation kept getting worse. By the mid-2000s, it was widely recognized as a notoriously buggy program, one so insecure it could lead to malware infecting your PC. (I worked in IT at the time, and the vast majority of issues I encountered on Windows PCs stemmed entirely from Flash.) When the iPhone launched without support for Flash in 2007, it was clear the end was near. YouTube and other video sites moved over to HTML5 video players at that point, and it became the standard by 2015.
By the early 2010s, YouTube and Amazon weren’t happy just licensing content from Hollywood, they wanted some of the action themselves. So the original programming boom began, which kicked off with mostly forgettable shows (anyone remember Netflix’s Lilyhammer or Amazon’s Alpha House? Hemlock Grove? They existed, I swear!).
But then came House of Cards in 2013, Netflix’s original series created by playwright Beau Willimon, executive produced (and partially directed) by renowned filmmaker David Fincher and starring Oscar winner Kevin Spacey (before he was revealed to be a monster). It had all of the ingredients of a premium TV show, and, thanks to Fincher’s deft direction, it looked like something that would be right at home on HBO. Most importantly for Netflix, it got some serious awards love, earning nine Emmy nominations in 2013 and walking away with three statues.
By that point, we could watch streaming video in many more places than our computer’s web browser. You could pull up just about anything on your phone and stream it over 4G LTE, or use your smart TV’s built-in apps to catch up on SNL over Hulu. Your Xbox could also serve as the centerpiece of your home entertainment system. And if you wanted the best possible streaming experience, you could pick up an Apple TV or Roku box. You could start a show on your phone while sitting on the can, then seamlessly continue it when you made your way back to your TV. This was certainly some sort of milestone for humanity, though I’m torn on it actually being a net win for our species.
Instant streaming video. Original TV shows and movies. This was the basic formula that pushed far too many companies to offer their own streaming solutions over the past decade. In the blink of an eye, we got HBO Max, Disney+, Apple TV+, Peacock, and Paramount+. There’s AMC+, powered almost entirely by the promise of unlimited Walking Dead shows. A Starz streaming service. And there are countless other companies trying to be a Netflix for specific niches, like Shudder for horror, Criterion Channel for cinephiles and Britbox for the tea-soaked murder-mystery crowd.
And let’s not forget the wildest, most boneheaded streaming swing: Quibi. That was DreamWorks mastermind Jeffrey Katzenberg’s nearly $2 billion mobile video play. Somehow he and his compatriots thought people would pay $5 a month for the privilege of watching videos on their phones, even though YouTube was freely available.
Every entertainment company thinks it can be as successful as Disney, which has a vast and beloved catalog of content as well as full control of Lucasfilm and Marvel’s properties. But, realistically, there aren’t enough eyeballs and willing consumers for every streaming service to succeed. Some will die off entirely, while others will bring their content to Netflix and more popular services (like Paramount is doing with Star Trek Prodigy). There are already early rumors of Comcast (NBCUniversal’s parent company) and Paramount considering some sort of union between Peacock and Paramount+.
Online video was supposed to save us from the tyranny of expensive and chaotic cable bills, and despite the messiness of the arena today, that’s still mostly true. Sure, if you actually wanted to subscribe to most of the major streaming services, you’d still end up paying a hefty chunk of change. But hey, at least you can cancel at will, and you can still choose precisely what you’re paying for. Cable would never.
It’s 2024, and I’m trying to watch a video on the internet. I slip on the Apple Vision Pro, a device that looks like it could have been a prop for The Matrix. I launch Safari in a 150-inch window floating above my living room and watch the Mortal Kombat trailer on YouTube. That whole process takes 10 seconds. I never had the chance to see the trailer or the original film in the theater. But thanks to the internet (and Apple’s crazy expensive headset), I can replicate that experience.
Perhaps that’s why, no matter how convoluted and expensive streaming video services become, I’ll always think: At least it’s better than watching this thing over dial-up.
To celebrate Engadget's 20th anniversary, we're taking a look back at the products and services that have changed the industry since March 2, 2004.
This article originally appeared on Engadget at https://www.engadget.com/streaming-video-changed-the-internet-forever-170014082.html?src=rss
Meta and LG have partnered up to “expedite” Meta’s extended reality (XR) business. What does that mean exactly? We don’t know, but Meta’s current VR/XR business is fairly robust, with the recent release of the Quest 3 headset.
LG says the ultimate goal of the partnership is “to combine the strengths of both companies across products, content, services and platforms to drive innovation in customer experiences within the burgeoning virtual space.”
Meta CEO Mark Zuckerberg traveled to LG’s headquarters in Seoul to announce the collaboration. During this visit, LG CEO William Cho tried out the Quest 3 and the recently-released Ray-Ban Meta smart glasses. The business leaders discussed “business strategies and considerations for next-gen XR device development.” LG’s CEO also seemed to take a particularly keen interest in Meta’s large language models and the potential to further integrate AI into standalone devices.
As stated above, we don’t know exactly what this partnership will entail. LG says it hopes to bring together “Meta’s platform with its own content/service capabilities” from its TV business. That sounds pretty boring, but LG also said the partnership will combine “Meta’s diverse core technological elements with LG’s cutting-edge product and quality capabilities.”
This leads to the lens-shaped elephant in the room. Meta XR and VR devices require displays and LG makes displays. It could be just that simple. After all, even Apple relied on Sony for the micro-OLED displays inside of the Vision Pro headset.
This news follows LG creating a dedicated XR business unit last year, which was founded to “accelerate the pursuit of new ventures in the virtual space arena.” This led to rumors that the company was planning to launch its own VR/XR headset, which could still happen.
This article originally appeared on Engadget at https://www.engadget.com/meta-partners-up-with-lg-to-expedite-its-extended-reality-ventures-163251353.html?src=rss
Video doorbells manufactured by a Chinese company called Eken and sold under different brands for around $30 each come with serious security issues that put their users at risk, according to Consumer Reports. The publication found that these doorbell cameras are sold on popular marketplaces like Walmart, Sears and Amazon, which has even given some of their listings the Amazon’s Choice badge. They're listed under the brands Eken, Tuck, Fishbot, Rakeblue, Andoe, Gemee and Luckwolf, among others, and they're typically linked to a user's phone via the Aiwit app. Outside the US, the devices are sold on global marketplaces like Shein and Temu. We found them on Chinese website Alibaba and Southeast Asian e-commerce website Lazada, as well.
Based on Consumer Reports' investigation, these devices aren't encrypted and can expose a user's home IP address and WiFi network name to the internet, making it easy for bad actors to gain entry. Worse, somebody with physical access to the doorbell could easily take control of it by creating an account on the Aiwit app and then pressing down on its button to put it into pairing mode, which then connects it with their phone. And, even if the original owner regains control, the hijacker can still get time-stamped images from the doorbell as long as they know its serial number. If they choose "to share that serial number with other individuals, or even post it online, all those people will be able to monitor the images, too," Consumer Reports explains.
Based on the ratings these doorbells' listings got on Amazon, the platform has sold thousands to people who were probably expecting the devices to be able to provide some form of security for their homes. Instead, the devices pose a threat to their safety and privacy. The doorbells could even put people's well-being and lives at risk if, say, they have stalkers or are domestic violence victims with dangerous exes who want to follow their every move.
People who own one of these video doorbells can protect themselves by disconnecting it from their WiFi and physically removing it from their homes. Consumer Reports said it notified the online marketplaces selling them about its findings in hopes that their listings would get pulled down. Temu told the publication that it's looking into the issue, but Amazon, Sears and Shein reportedly didn't even respond.
This article originally appeared on Engadget at https://www.engadget.com/surprise-this-30-video-doorbell-has-serious-security-issues-130630193.html?src=rss
The Federal Trade Commission (FTC) concluded Elon Musk ordered Twitter (now X) employees to take actions that would have violated an FTC consent decree regarding consumers’ data privacy and security. The investigation arose from the late 2022 episode informally known as “The Twitter Files,” where Musk ordered staff to let outside writers access internal documents from the company’s systems. However, the FTC says Twitter security veterans “took appropriate measures to protect consumers’ private information,” likely sparing Musk’s company from government repercussions by ignoring his directive.
FTC Chair Lina Khan discussed the conclusions in a public letter sent Tuesday to House Judiciary Committee Chair Jim Jordan, as reported by The Washington Post. Jordan and his Republican colleagues have tried to turn the FTC’s investigation into a political wedge issue, framing the inquiry as a free speech violation — perhaps to shore up GOP support from Musk’s legion of rabid supporters. Jordan and his peers previously described the investigation as “attempts to harass, intimidate, and target an American business.”
Khan’s response to Jordan adopts a tone resembling that of a patient teacher explaining the nuance of a complicated situation to a child who insists on seeing simplistic absolutes. “FTC staff efforts to ensure Twitter was in compliance with the Order were appropriate and necessary, especially given Twitter’s history of privacy and security lapses and the fact that it had previously violated the 2011 FTC Order,” Khan wrote.
“When a firm has a history of repeat offenses, the FTC takes particular care to ensure compliance with its orders,” she continued.
The FTC’s investigation stemmed from allegations that Musk, newly minted as Twitter’s owner, ordered staff to give outside writers “full access to everything” in late 2022. Had staff obeyed Musk’s directive, the company likely would have violated a settlement with the FTC (originally from 2011 but updated in 2022) requiring the company to tightly restrict access to consumer data.
In November 2022, the FTC said publicly it was monitoring Twitter’s developments following Musk’s acquisition with “deep concern.” That followed the resignation of chief information security officer Lea Kissner and other members of the company’s data governance committee. They expressed concerns that Musk’s launch of a new account verification system didn’t give them adequate time to deploy security reviews required by the FTC.
Ultimately, Twitter security veterans ignored Musk’s “full access to everything” order. “Longtime information security employees at Twitter intervened and implemented safeguards to mitigate the risks,” Khan wrote in the letter. “The FTC’s investigation confirmed that staff was right to be concerned, given that Twitter’s new CEO had directed employees to take actions that would have violated the FTC’s Order.”
Rather than supplying outside writers with the “full access” Musk wanted them to have, Twitter employees accessed the systems and relayed select information to the group of outsiders. “Ultimately the third-party individuals did not receive direct access to Twitter’s systems, but instead worked with other company employees who accessed the systems on the individuals’ behalf,” Khan wrote.
The FTC says it will continue to monitor X’s adherence to the order. “When we heard credible public reports of potential violations of protections for Twitter users’ data, we moved swiftly to investigate,” FTC spokesman Douglas Farrar said in a statement to The Washington Post. “The order remains in place and the FTC continues to deploy the order’s tools to protect Twitter users’ data and ensure the company remains in compliance.”
This article originally appeared on Engadget at https://www.engadget.com/ftc-concludes-twitter-didnt-violate-data-security-rules-in-spite-of-musks-orders-191917132.html?src=rss
Ever posted or left a comment on Reddit? Your words will soon be used to train an artificial intelligence company's models, according to Bloomberg. The website signed a deal that's "worth about $60 million on an annualized basis" earlier this year, it reportedly told potential investors ahead of its expected initial public offering (IPO). Bloomberg didn't name the "large AI company" that's paying Reddit millions for access to its content, but their agreement could apparently serve as a model for future contracts, which could mean more multi-million deals for the firm.
Reddit first announced that it was going to start charging companies for API access in April last year. It said at the time that pricing would be split into tiers so that even smaller clientele could afford to pay. Companies need that API access to be able to train their chatbots on posts and comments — a lot of which have been written by real people over the past 18 years — from subreddits on a wide variety of topics. However, that API is also used by other developers, including those providing users with third-party clients that are arguably better than Reddit's official app. Thousands of communities shut down last year in protest, which even caused stability issues that affected the whole website.
Reddit could go public as soon as next month with a $5 billion valuation. As Bloomberg notes, the website could convince investors still on the fence to take the leap by showing them that it can make big money and grow its revenue through deals with AI companies. The firms behind generative AI technologies are working to update their large language models or LLMs through various partnerships, after all. OpenAI, for instance, already inked an agreement that would give it the right to use Business Insider and Politico articles to train its AI models. It's also in talks with several publishers, including CNN, Fox Corp and Time, Bloomberg says.
OpenAI is facing several lawsuits that accuse it of using content without the express permission of copyright holders, though, including one filed by The New York Times in December. The AI company previously told Engadget that the lawsuit was unexpected, because it had ongoing "productive conversations" with the publication for a "high-value partnership."
This article originally appeared on Engadget at https://www.engadget.com/reddit-reportedly-signed-a-multi-million-content-licensing-deal-with-an-ai-company-124516009.html?src=rss