Google’s Pixel 9 ‘Pro’ Fold rebrand is just an attempt to cover up spec mediocrity

Opinion post by
Robert Triggs

Before the countless leaks, we all expected Google’s next-gen foldable to be called the Pixel Fold 2. That would be a logical name for the successor to the Google Pixel Fold, after all. Instead, we already have a rebrand — the Pixel 9 Pro Fold aligns the new foldable with the broader Pixel 9 series. It’s a bit odd, though, not least because it obscures which foldable generation Google is on. Maybe that’s the point?

There’s no clear-cut rule for naming products, of course. Samsung is content with an S and Z distinction between its classic and foldable phones. OPPO has the Find N series for its foldables compared to the traditional Find X flagship range. However, Motorola bundles all of its best phones under the Razr moniker, and HONOR’s Magic series accounts for all its flagships, foldable or not. Still, Google’s sudden about-face is harder to explain. Does it want us to believe the Pixel 9 Pro Fold is just a foldable version of the 9 Pro? Because it really isn’t.

Despite the name, the Pixel 9 Pro Fold is not just a foldable version of the 9 Pro.

Perhaps the most exciting change to the range is that the compact Pixel 9 Pro and larger 9 Pro XL sport the very best smartphone technology Google has to offer. The Pro moniker designates feature parity, providing Google’s best performance, camera suite, storage options, build quality and protection, and all the other capabilities you’d expect from a premium flagship. However, that promise only applies to Google’s non-foldable phones.

The Fold dilutes this Pro tag. As is typical for foldables, you trade down water and dust resistance from an IP68 rating to IPX8, meaning no protection against dust ingress. If you’re a heavy media user, you can’t buy the Fold with more than 512GB storage (but at least it comes with 256GB minimum); only the non-foldable Pros come in a 1TB configuration. The 6.3-inch external display isn’t as good as the regular Pixel 9 Pro either; it has a lower resolution, lower peak and HDR brightness, and only drops from 120Hz to 60Hz rather than as low as 1Hz to save power on static content.

The Fold is slower to charge too; it’s capable of just 21W of power versus 27W on the Pro and 37W on the XL. The wireless charging situation is even worse. The Fold is capped at a bog-standard 7.5W, far off the 21W available to the 9 Pro via the Pixel Stand (2nd gen) and even slower than the 15W Pixel Stand and 12W Qi charging available to the baseline Pixel 9. The Fold doesn’t support Battery Share either, so it can’t be used to power up your other flagging gadgets.

Key downgrades (Pixel 9 Pro vs. Pixel 9 Pro Fold):

  • Storage: 128GB, 256GB, 512GB, 1TB vs. 256GB, 512GB
  • IP Rating: IP68 vs. IPX8
  • Display (external): 2,856 x 1,280 LTPO OLED, 495 PPI, up to 2,000 nits HDR, up to 3,000 nits peak brightness, 1-120Hz refresh rate vs. 2,424 x 1,080 OLED, 422 PPI, up to 1,800 nits HDR, up to 2,700 nits peak brightness, 60-120Hz refresh rate
  • Wired Charging: 27W vs. 21W
  • Wireless Charging: 21W (Pixel Stand 2nd gen) and 12W (Qi) vs. 7.5W
  • Battery Share: Yes vs. No
  • Rear cameras:
      Pixel 9 Pro: Main 50 MP Octa PD, ƒ/1.68 aperture, 82° field of view, 1/1.31" image sensor; Ultrawide 48 MP Quad PD with autofocus, ƒ/1.7 aperture, 123° field of view, 1/2.55" image sensor; Telephoto 48 MP Quad PD, ƒ/2.8 aperture, 22° field of view, 1/2.55" image sensor, 5x optical zoom, Super Res Zoom up to 30x
      Pixel 9 Pro Fold: Main 48 MP Quad PD, ƒ/1.7 aperture, 82° field of view, 1/2" image sensor; Ultrawide 10.5 MP Dual PD with autofocus, ƒ/2.2 aperture, 127° field of view, 1/3.4" image sensor; Telephoto 10.8 MP Dual PD, ƒ/3.1 aperture, 23° field of view, 1/3.2" image sensor, 5x optical zoom, Super Res Zoom up to 20x
  • Selfie camera: 42 MP Dual PD, ƒ/2.2 aperture, 103° field of view, autofocus vs. 10 MP Dual PD, ƒ/2.2 aperture, 87° field of view
  • 8K Video Boost: Yes vs. No
  • Ultrawide and Telephoto Video Boost: Yes vs. No
  • Cinematic Blur: Yes vs. No (only Pan)
  • Action Pan: Yes vs. No

But perhaps the biggest offense is found in the camera department. Yes, the Pixel 9 Pro Fold has a competent triple-camera array, but it’s not in the same league as the Pixel 9 Pro and Pro XL. The main, ultrawide, and telephoto sensors are all notably smaller in the Fold and offer inferior autofocus, meaning more noise in low-light environments and a greater reliance on Google’s admittedly excellent software to plug the gaps. While probably not noticeable in daylight, these differences are bound to show up in more difficult shooting situations and when using features like Astrophotography.

For a brand that prides itself on photography, the Fold has a lot of downgrades compared to Google's best.

The lower-resolution telephoto camera also can’t take 10x “optical quality” crops and only supports Super Res Zoom out to 20x, compared to 30x on the other two. That means inferior-looking snaps when zooming in at concerts or trying to capture distant wildlife. Likewise, the selfie camera is closer in resolution to the affordable Pixel 9 than to the much-upgraded sensors in the Pro and XL, and it doesn’t list autofocus either, again hinting at weaker performance in difficult lighting.

If that wasn’t bad enough, the Fold also doesn’t receive 8K cloud-based video upscaling, putting it in the same basket as the significantly cheaper Pixel 9. It can’t take full-res (48MP) photos, there’s no Cinematic Blur, and no Action Pan. Ouch. For a brand that prides itself on media capture, the Fold has a lot of downgrades compared to Google’s best setup.

Google Pixel 9 Pro Fold in Porcelain with a close up of the rear camera module

Credit: C. Scott Brown / Android Authority

This isn’t to say the Fold is miles off the pace of the non-folding Pros. It still has the remainder of Google’s best software features and Pro camera controls. Plus, the Pixel 9 Pro Fold opens up a whole new world of multitasking and content viewing on that large inner display. It’s also improved over Google’s previous attempt, providing more years of support, a thinner frame, and a portrait-oriented inner display.

This is still a flagship foldable, but with an eyewatering $1,799 price tag and a new Pro moniker, you’d be forgiven for thinking you’re buying the absolute best of everything that Google has to offer. However, looking past the rename, it’s clear that the Pixel 9 Pro Fold doesn’t match the rest of the 9 Pro series in every facet, particularly photography. That’s a letdown.

I saved $100s building my own NAS home server

DIY NAS home server
Credit: Robert Triggs / Android Authority

Self-hosting your data and services with Network Attached Storage (NAS) is a great way to free yourself from the spiraling costs and tangled web of subscription fees. Whether you’re simply looking to back up your photos or stream 4K movies on your travels, there’s a wide range of products to pick from, but not quite so many to suit all budgets.

If you’ve been tempted by one of the best NAS systems but are put off by the expense or lack of gradual upgrade paths, building a cheap DIY NAS could be a better alternative for you.

Pairing Android with Windows File Explorer is the feature I didn’t know I badly needed

If you’re not already using it, Windows Phone Link is an increasingly useful tool for Android users who own PCs. In addition to notification, text, and call synchronization, Microsoft enabled us to use our phones as webcams earlier this year — helpful if you have an old laptop with a poor-quality cam, but not exactly essential. However, the latest addition, Android files integrated directly into Windows File Explorer without the need for a wire, is a feature I’m now wondering how I ever lived without.

Linking your Android phone to File Explorer does exactly what you’d expect: your smartphone files are listed within Windows File Explorer, seamlessly integrated alongside regular PC files, OneDrive, and any other storage you might have attached. While previously that was only possible over USB, this new implementation uses Phone Link to manage everything wirelessly. You can open your phone’s files, copy from Android to PC and vice versa, and rename, move, and delete files all over the air.

Smartphone marketing demystified: The specs that matter, those that don’t

While it’s hard to go wrong with any of today’s top-tier smartphones, ending up with the best bang for your buck or splitting the mid-range wheat from the chaff is still often a case of deciphering a phone’s spec sheet. This already laborious task isn’t helped by the marketing gobbledegook thrown around by various brands in a bid to make their otherwise mediocre handsets stand out. Just what the heck is “virtual RAM” anyhow?

To help, let’s break down all the key smartphone specifications and highlight what to look out for — and what to ignore — when making your next purchase.

Your phone’s brain: The processor

Good specs:
  • Snapdragon 8XX or 7XX series
  • Tensor, Exynos 2XXX, Dimensity 9XXX
Ignore:
  • Undisclosed “octa-core” CPU
  • An old chip that’s nearing end-of-life

We’ll start with the processor, or SoC. Weirdly, this is both the most and least important aspect of your phone, depending on what you expect from your next handset. If you have to have the absolute best performance, features, and networking capabilities, then a flagship chip is a must, but often these features are surplus to requirements.

There are too many chipsets to get into them all, but virtually every smartphone processor built since the turn of the decade is ample for running key mobile tasks: browsing Facebook, scrolling Insta, that sort of thing. Google’s Pixel range is a prime example of smartphones that don’t pack the absolute fastest processors around yet still offer one of the best mobile experiences in their price brackets. It’s more about what your phone can do than what it benchmarks. That said, I’d urge everyone to avoid the bottom-of-the-barrel processors you’ll still find in ultra-affordable handsets, if it can be helped. Anything that lists itself as little more than an “octa-core” processor is still probably bad news.

Ignore core counts and GHz; you need to look at a chip's broader capabilities.

iPhones, of course, all sport high-end chips, so there’s little issue (or choice) here anyway. If you want to be sure of top-tier Android performance, stick to flagship-grade chips from the big players. Qualcomm’s Snapdragon 8 and 8S, MediaTek’s Dimensity 9___, Samsung’s Exynos 2___, and Google’s Tensor series are all rock solid, even if you pick a model that’s a generation or two old at this point. High-end mobile gamers, however, will only find the latest features, such as ray tracing, and the fastest performance on the newest processors, such as the Snapdragon 8 Gen 3; that extra headroom is also a boon for heavy multitaskers and those who edit their photos and videos on the go. It’s those less mainstream use cases that really benefit from focusing on the processor as a key component, but even then, you have to consider thermals and cooling as well, and bigger phones tend to do better at that.

If you’re on a tighter budget, sliding down to the Snapdragon 7 or even 6 series, along with MediaTek’s more recent Dimensity 8XXX range, is a fair compromise that won’t disappoint on the networking or security fronts, and even AI capabilities are quickly making their way down to these price points.

How does the phone perform under stress, and does it have the gaming, AI, or other features you want?

Of course, you can get pretty granular on all the internal processor differences: CPU core counts and microarchitectures for general processing, the GPU for gaming and other graphics, ISP capabilities for pictures and video, and the latest trends in NPUs for AI. While interesting from an enthusiast standpoint, we can’t mix and match these parts ourselves, and it would be a waste of time to make a purchasing decision based on specs like clock speed (GHz) or AI TOPS. It’s less of a headache to follow the general portfolio trends outlined above and pay attention to the on-device features a given handset is capable of, plus maybe a benchmark or two if you need higher-end performance.

The bottom line is that picking the best processor used to matter a lot more than it does today. However, elite gaming and AI are starting to shift focus back to the flagship-tier chipsets once more.

Cameras, cameras, cameras

OPPO HONOR and Xiaomi camera phones

Credit: Robert Triggs / Android Authority
Good specs:
  • Wide, ultrawide, and telephoto combo
  • Wide aperture on the main and tele
  • Good-sized sensors on all lenses
Ignore:
  • Counting megapixels
  • Ultra-long range zoom claims
  • Macro lenses

For most people, their smartphone is their primary camera. As such, navigating this increasingly complex area of a modern smartphone is a must, but it isn’t easy. First, let’s run through the key terms.

  • Megapixels — More is better? Well, it depends. In theory, more pixels mean more detail, provided enough light makes it to each tiny pixel. More pixels in a small space means less light per pixel, which can reduce dynamic range, increase noise, or force longer shutter speeds. Not good. Modern pixel-binning sensors aim to get around this by merging data from nearby cells while still allowing for high-resolution photography, but you’re often left shooting at a lower resolution by default. Still, remember that just 12MP is more than enough for a 12-inch print. Don’t be swayed by the allure of a 200MP sensor.
  • Sensor size — The flip side of megapixels is the overall sensor size; the bigger the sensor, the bigger the pixels, and the better the light capture. 1-inch is as large as we’ve seen in smartphones, though around 1/1.3-inch is more typical for primary cameras, and secondary and tertiary cameras are often much smaller. Sensors below 1/2-inch are small by modern standards and won’t pair well with high megapixel counts or low-light environments. Bigger is better, but that comes with a larger camera bump as a trade-off, so there’s a limit; around 1/1.5 inches or above is adequate.
  • Aperture — Part of the “exposure triangle,” the aperture measures how wide the lens opening is. Again, wider means more light, which is good, and more bokeh, which is also deemed good (mostly). However, very wide apertures and very large sensors can struggle with partial subject focus, particularly at close range, and they don’t make for the sharpest landscapes. Thankfully, variable aperture technology gives you the best of both worlds, but it’s only found in a handful of premium smartphones. Don’t dwell on this spec, but be cautious of any smartphone lens with an aperture narrower than f/3 (a higher f-number); it probably won’t be very good in low light.
  • Focal length/zoom — These are two halves of the same coin; divide two lenses’ focal lengths and you get the zoom factor when switching between them. For example, a 75mm telephoto lens has 3x the zoom factor of a 25mm lens (see the quick sketch after this list). Paying attention to the optical zoom levels a phone has is important; you’ll receive the best image quality at these points. Zoom factors in between rely on software upscaling of some kind, which leaves a big quality gap between, say, a 1x and a 5x lens. Equally, focal length tells you a little about what the lens is good for. Below 20mm is extremely ultrawide, good for landscapes and broad scenes but at the expense of distorted proportions. 35mm is roughly equivalent to the human eye’s field of view, 50mm or so is considered the most flattering for portraits, and 100mm or more is a long-range zoom. Also, ignore any claims of 50x or 100x zoom; those are always digital and look terrible.
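
To make those rules of thumb concrete, here is a minimal back-of-the-envelope sketch in Python (not from the article); the lens focal lengths, sensor types, and print size below are illustrative examples only.

```python
def zoom_factor(tele_focal_mm: float, base_focal_mm: float) -> float:
    """The zoom factor between two lenses is simply the ratio of their focal lengths."""
    return tele_focal_mm / base_focal_mm

def sensor_area_ratio(type_a_inches: float, type_b_inches: float) -> float:
    """Rough relative light-gathering area of two sensor 'type' sizes.
    Sensor types (e.g. 1/1.31") scale roughly with diagonal, so relative
    area goes with the square of the ratio (an approximation)."""
    return (type_a_inches / type_b_inches) ** 2

def megapixels_for_print(width_in: float, height_in: float, ppi: int = 300) -> float:
    """Megapixels needed to print at a given pixels-per-inch density."""
    return (width_in * ppi) * (height_in * ppi) / 1e6

print(zoom_factor(75, 25))                           # 3.0x, as in the example above
print(round(sensor_area_ratio(1 / 1.31, 1 / 2), 2))  # a 1/1.31" type has ~2.3x the area of a 1/2" type
print(round(megapixels_for_print(12, 8), 1))         # ~8.6MP for a 12x8-inch print at 300 PPI
```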

We could dive deeper into autofocus technologies (make sure your wide lens has AF at least!), backplane types, and the like, but that’s getting too deep into the weeds for this article and probably shouldn’t sway your entire phone choice unless you’re after something very, very specific. Instead, the next step is to look at what camera lenses the phone has. These typically fall into five categories: ultrawide, wide/primary, telephoto, periscope, and macro.

99% of the time, a dedicated macro camera is just there to pad out the numbers. They’re usually low resolution, tiny, and basically bad. Pretend the phone doesn’t have it; you’ll likely forget about it anyway. A wide and ultrawide pairing is most common in the mid-range market, offering a step back to fit more in but lacking long-range or truly portrait-friendly capabilities. Telephoto and periscope are two different ways of building a zoom camera; the latter bounces light off a mirror or two, creating a longer focal length but losing some light in the process. Ultra-premium phones regularly offer two zoom cameras to cover multiple distances with high quality. 3x to 5x is good for portraits and nearby subjects, while 10x will capture those concert stages. There are no strict winners here; take your pick based on the type of photos you typically take.

How many GB of space do I need?

Smartphone Specs Closeup

Credit: Robert Triggs / Android Authority
Good specs:
  • 256GB for multimedia
  • UFS4 storage type is the fastest
Ignore:
  • microSD card support (rare and often slow)
  • eMMC storage (slow and outdated)

Just like the processor, how much physical storage space (in gigabytes or GB) you need depends on how you use your phone. If you just make calls, check emails, and browse the same four websites, you can probably get away with a smaller storage option. But if you’re a gamer, photographer, or meme archivist, you’ll need a more forgiving amount of space.

Even though it’s still often the base configuration, 128GB isn’t all that much storage in the age of mass media and mobile photography. Subtracting the size of the OS and some apps, you might be lucky to be left with 80GB free for other content. That’s the equivalent of roughly 10,000 8MB JPEG photos, 20,000 four-minute MP3 tracks, or 80 hours of compressed 1080p video. That sounds like a lot, but bringing years of old pictures and whatever else to a new phone eats further into this. While you can mitigate physical limitations with cloud storage, that’s an expensive solution in the long term.
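
For what it’s worth, here is the rough arithmetic behind those figures as a quick Python sketch; the per-file sizes are the approximate assumptions that make the numbers above work out, not measurements.

```python
free_gb = 80                 # rough free space left on a 128GB phone after the OS and apps
free_mb = free_gb * 1000     # using decimal gigabytes, as storage vendors do

photos  = free_mb / 8        # assuming ~8MB per JPEG photo          -> ~10,000 photos
tracks  = free_mb / 4        # assuming ~4MB per four-minute MP3     -> ~20,000 tracks
video_h = free_gb / 1        # assuming ~1GB per hour of 1080p video -> ~80 hours

print(f"{photos:,.0f} photos, {tracks:,.0f} MP3s, or {video_h:.0f} hours of video")
```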

If you’re the designated family photographer, I recommend 256GB at minimum. You might even want to future-proof your purchase with 512GB, though those upfront prices can be eyewatering.

The other factor to consider is storage speed. While most flagships use the fastest storage available (UFS 4 at the time of writing), budget options often use slightly slower versions like UFS 3.1 or even 2.0. Mostly, this will marginally affect large app or game loading times or your phone’s ability to record very high-resolution (4K or 8K) video, which is less of a requirement for budget models anyway. I’d avoid anything still listed with eMMC storage, as that’s positively outdated.

Dazzling displays

samsung galaxy s24 ultra vs galaxy s23 ultra reflectivity screen on

Credit: Ryan Haines / Android Authority
Good specs:
  • Dynamic refresh rate (1-120Hz)
  • HBM (High Brightness Mode)
  • High PWM rate
Ignore:
  • Peak brightness in nits
  • 4K resolution
  • Niche HDR formats

Display technology has long been a battleground between the senses and snake oil. There’s a load we could get into here, from aspect ratios and contrast to sub-pixel layouts and refresh rates. Let’s hit those key terms again.

  • Resolution — Can you see the difference between 4K and 1080p on a 6-inch screen watching a compressed YouTube video? Absolutely not. In fact, your phone almost certainly defaults to an FHD+ software resolution, even if it has a QHD+ hardware panel, to help save on battery. An FHD+ resolution (above or around 1,920 x 1,080, accounting for aspect ratio) is sufficient, even for a large form factor phone; consider anything above that a bonus, but don’t quibble over a few pixels (see the pixel-density sketch after this list).
  • Brightness — Ripe for exploitation, peak brightness (in nits) is not a hugely helpful metric on its own because it fails to tell you under what circumstances this brightness is achieved and if it’s sustained. Often, the largest metric you see here refers to instantaneous peak brightness in a very localized part of the screen, such as when viewing HDR content. Ignore claims of 4,500 nits. 200 – 300 nits is all you need for indoor viewing, and 600 – 800 for outdoor. Anything above that is a bonus but not strictly necessary. Even if you love to watch HDR video on a tiny screen, peak 1,500 – 2,000 nits is plenty.
  • HDR — HDR technology is a boon for movie viewers, but its benefits are contentious for tiny screens that are often viewed in less-than-ideal conditions. Still, most high-end and even mid-range panels are HDR-capable. They often come in flavors supporting HDR10+ and/or Dolby Vision; take your pick depending on your preferred content format.
  • AMOLED, OLED, etc. — The OLED vs. LCD battle is over, and OLED won. Even many inexpensive smartphones now use OLED in some form, whether that’s AMOLED, POLED, flexible OLED, or another derivative, delivering superior viewing angles, contrast, and color. That said, ultra-budget phones still use LCD, and the viewing experience suffers as a result. I’d suggest springing for an OLED panel if you can.
  • Refresh rate — This spec makes a more meaningful difference than most to how responsive your phone feels. Scrolling through web pages looks much smoother at 120Hz than at 60Hz, with 90Hz being a decent compromise for mid-range models. What you really want here, though, is an adaptive/dynamic refresh rate, preferably with a display that can go as low as 1Hz to save power when not showing moving content. These are most often LTPO-type displays reserved for the higher end of the market.
  • PWM rate — While refresh rate determines how quickly content updates on the display, PWM (Pulse Width Modulation) controls how rapidly the display’s light pulses on and off in order to dim it. Low PWM rates can cause headaches in the small percentage of users who are sensitive to flickering lights, even in cases where you can’t perceive any flickering. The effect is most acute when dimming the phone’s display in a dark room. Higher PWM values are good here, and anything in excess of 1,000Hz helps, but don’t agonize over this if you’re not sensitive.
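
As a quick illustration of why FHD+ is already plenty, here is a minimal pixel-density calculation in Python; the resolutions are the display figures quoted in the earlier Pixel comparison, while the 6.3-inch diagonal is an assumed example value.

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch from a panel's resolution and diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(2856, 1280, 6.3)))  # ~497 PPI for the QHD+-class panel
print(round(ppi(2424, 1080, 6.3)))  # ~421 PPI for the FHD+-class panel
# Both sit comfortably past the ~300 PPI rule of thumb where individual
# pixels stop being visible at typical phone viewing distances.
```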

A huge amount of technology is packed into the latest smartphone displays, and picking out exactly what you want depends on what you need from a display. Higher refresh rates will be most important if you’re a doom-scroller or gamer. If you like to read while commuting, a robust and reliable outdoor peak brightness will be key. Or if you find displays give you a headache while reading in the dark, grab one with a higher PWM rate.

RAM: Don’t just download more

galaxy S20 LPDDR5 Samsung Unpacked 2020

Good specs:
  • 8GB+ LPDDR5X for multitasking
  • 12GB+ LPDDR5X for AI/gaming
Ignore:
  • Virtual RAM

Your phone’s temporary storage, or RAM, is further down this list but nonetheless important, particularly if you’re eyeballing a phone for AI or gaming. 8GB of RAM has been and remains plenty for most mobile multitasking use cases, but if you want to keep lots of apps and games open or run trailblazing AI features from Gemini Nano, you’ll want 12GB or even more.

Equally, those demanding use cases want RAM that’s quick. At the time of writing, LPDDR5X is the fastest available type of RAM, but LPDDR4X is still fine for a budget model where basic multitasking is more important than loading up Genshin Impact.

There is a recent gimmick to be aware of here, though: virtual RAM. You might also see this listed as Dynamic RAM, Memory Expansion, or such, but the idea is the same. This is essentially swap space that stores unused programs in a portion of your main internal storage rather than in RAM. The benefit is that fewer apps will close if you fill up your regular RAM, but storage is slower than RAM, so there’s absolutely no performance benefit for AI or gaming. Virtual RAM is useful for phones with a small amount of real memory, but only to a point, and is not a replacement for proper RAM.

RAM is more important for AI phones. Gloss over virtual RAM; it's not a cure-all.

Virtual RAM allows companies to claim a phone has very large amounts of memory, such as 24GB, but the split may only be 12GB real RAM and 12GB virtual. That’s fine, but there’s not a huge benefit to virtual RAM, especially in such huge sizes. Always check the fine print, particularly on mid-rangers from China, where this trend is more prevalent, and make sure you buy a phone with a healthy chunk of physical RAM.

Charging power and protocols

Xiaomi 14 Ultra charging power test

Credit: Robert Triggs / Android Authority
Good specs:
  • USB Power Delivery (PPS) support
  • Qi wireless charging support
Ignore:
  • 100W or higher charging claims

While we’re on the subject of inflated numbers, charging power has to be one of the biggest minefields to navigate in recent memory. It’s not just the Chinese brands claiming 100W or 200W that can catch you out; even Google’s Pixel 6 was caught playing fast and loose.

But more power equals faster charging, right? Well, yes, in theory, but are you measuring at the plug or the phone, how long can you sustain that power, and under what conditions? If I had a dollar for every ultra-high-wattage phone I’ve tested that failed to maintain peak power for more than two minutes, well, I wouldn’t be rich, but you get the idea. If you live in a warm country, these effects will be even worse. Even if you can hit 100W, so what if you’re confined to the in-box charger or bricks from one specific brand? While high power and fast times are nice, we should consider the battery longevity, real charge times, and ecosystem and e-waste trade-offs.

Forget 200W, grab a phone that charges nicely with third-party plugs and power banks.

What’s most important, in my book, is how quickly a phone charges via the USB Power Delivery standard — the default protocol for charging over USB-C. If your phone plays nicely with USB PD (and the newer USB PD PPS), it’ll charge quickly with virtually any modern plug. Around 45W takes even the largest batteries from empty to full in an hour or so, while 65W is properly fast for a phone and suitable for many laptops. 30W or below is on the slower side but still far better than many of the aforementioned proprietary brands that can sit at 18W or under when not using their special blend of brick and cable. Similarly, a phone with Qi or Qi2 wireless charging will play nicely with a range of accessories, even if it charges slower than proprietary standards.
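
As a rough illustration of why something like 45W is already plenty, here is a simplified charge-time estimate in Python; the capacity, cell voltage, and efficiency figures are assumptions for the sketch, and real phones taper power as they fill, so treat the results as optimistic lower bounds.

```python
def charge_time_minutes(capacity_mah: float, avg_power_w: float,
                        cell_voltage: float = 3.85, efficiency: float = 0.85) -> float:
    """Very rough full-charge time: battery energy divided by average input power.
    Ignores the charging taper near 100%, so real-world times run longer."""
    energy_wh = capacity_mah / 1000 * cell_voltage
    return energy_wh / (avg_power_w * efficiency) * 60

print(round(charge_time_minutes(5000, 45)))  # ~30 minutes, idealized; an hour or so in practice
print(round(charge_time_minutes(5000, 30)))  # ~45 minutes, idealized
```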

Finally, a word on battery capacity (in mAh). This is too dependent on handset size and other specifications to give a definitive guide. However, 4,000mAh should see most users through a single day, while around 5,000mAh is better for gamers and power users. If you know you use your phone a lot, it’s better to err on the side of a bigger battery.

Maximum durability

Broken Cracked Screen in hand

Credit: Robert Triggs / Android Authority
Good specs:
  • Gorilla Glass protection
  • IP68 rating
Ignore:
  • No-name glass protection
  • Water-resistance claims with no rating

After um-ing and ah-ing about the internals, you should also consider the external hardware protecting your phone. There are two main things to ponder: screen/glass protection and water/dust resistance.

We have a handy guide on IP ratings. Broadly speaking, some level of water protection is a must. Accidents happen, and you’ll be glad you invested in an IP rating when “someone” spills coffee all over that expensive new purchase. We’d suggest an IP54 rating as the bare minimum, with an IP68 rating being the gold standard when spending money on upper-mid and flagship smartphones.

Likewise, glass protection can be the difference between “phew!” and “$100s” down the drain and hours wasted organizing a screen replacement. Corning Gorilla Glass is the industry standard, with Victus 2 and Gorilla Armor being the strongest options around. Apple uses Corning’s Ceramic Shield, which touts a similar, if not superior, hardening process, and there are various other industry players offering their own flavors. Comparing the various glass types is fraught with difficulties, but newer tends to be better, so we suggest not picking a phone with anything too dated. Of course, something is better than nothing at all. Oh, and be sure to check whether there’s a difference between front and back protection if your phone has a glass back. There usually is, but you don’t want to trade down too far and end up with a smashed rear panel.

I’d place less emphasis on any metal parts mentioned. While these can marginally affect a phone’s weight, aluminum, titanium, or others offer little to no indication of a phone’s ability to withstand drops or bends, as we’ve seen countless times over the years.

A weak Pixel 9 processor will test Google’s commitment to Pixel 8 Feature Drops

google gemini ask this video

Credit: Rita El Khoury / Android Authority

Opinion post by
Robert Triggs

If you’ve seen our latest Google Pixel 9 exclusive, you’ll know the phone’s Tensor G4 processor is set to be the smallest change to the series so far. While peak performance has never been a Tensor accolade, there’s little to no upgrade in the chip’s cornerstone AI capabilities either. The Tensor G4 reportedly features exactly the same third-generation TPU, codenamed “rio,” running at the same clock speed as the Tensor G3. The reason is that Google reportedly missed deadlines for a more potent custom chipset, which will now have to wait until the Pixel 10, and had to hastily cobble together an improved Tensor for the Pixel 9 series.

If this holds true, surely the Pixel 8 series should be able to run all of the Pixel 9’s upcoming AI features? Well, the CPU and GPU upgrades appear to be nowhere near big enough to make a meaningful difference to any AI processing, the DSP that runs camera algorithms is the same as last gen, and the identical TPU is the core that binds Google’s on-device AI capabilities together.

I’d argue that the Tensor G4 shares so many core similarities with the G3 that (virtually) the only reason Google won’t bring its latest features, such as AddMe and Pixel Screenshots, to the Pixel 8 series, at least not in a hurry, is to upsell the Pixel 9. There’s no denying that bringing such features to the Pixel 8 would make Google’s best-ever flagship even more compelling, but it might undermine launch excitement about the new models, despite their camera and other hardware upgrades. This raises a big question: Just how committed is Google to backporting features via Feature Drops?

Virtually the only reason Google won't bring the Pixel 9's latest AI features to the Pixel 8 series is to upsell the Pixel 9.

Google’s history with Feature Drops is pretty hit-and-miss. While it has brought plenty of new features to the Pixel lineup over time, we’re still waiting on some of the bigger promises, like Zoom Enhance. There’s no guarantee that all of Google’s latest AI features will even be available for the Pixel 9 at launch, so any hope of features making their way back to the Pixel 8 series in a timely manner feels remote. Still, the similarities between the Tensor G4 and G3 make this more plausible than in previous years, so here’s hoping that, even if there’s a reasonable delay, we see as many Pixel 9 AI features on the Pixel 8 as possible.

There’s one exception to all this — RAM. On-device AI is RAM heavy; it’s the reason the lower-specced Pixel 8 didn’t initially ship with Gemini Nano, while the 8 Pro’s larger 12GB RAM pool made it possible. According to leaks, the Pixel 9 is expected to ship with 12GB of RAM, and the Pro models will all receive a boost to 16GB. That’s a lot more memory than the baseline Pixel 8, but 12GB matches the capabilities of the Pixel 8 Pro. Once again, then, it looks like the more affordable Pixel 8 stands to miss out, but Google’s previous premium model should be capable of matching the Pixel 9. However, just how broken up Google’s AI feature set will become across models remains to be seen.
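
For a sense of why RAM is the gating factor, here is a rough footprint estimate for an on-device language model as a Python sketch; the parameter count, quantization levels, and overhead factor are illustrative assumptions, not Gemini Nano’s actual configuration.

```python
def model_ram_gb(params_billions: float, bits_per_weight: int, overhead: float = 1.3) -> float:
    """Rough RAM needed to hold a model's weights, plus ~30% for activations and caches."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

print(f"{model_ram_gb(3, 4):.1f} GB")  # a hypothetical ~3B-parameter model at 4-bit: roughly 2GB
print(f"{model_ram_gb(3, 8):.1f} GB")  # the same model at 8-bit: roughly 4GB
# That memory has to coexist with the OS, apps, and camera pipelines,
# which is why larger RAM pools make always-available on-device AI practical.
```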

The Pixel 8's 8GB RAM might be too small, but the 8 Pro is capable of matching the base Pixel 9.

Even so, a processor with few upgrades cuts through the usual inter-generational barrier and puts Google and the Pixel series in an interesting position. Can Google rely on the superb hardware upgrades alone to sell the Pixel 9 while using this opportunity to show that the best software features can transcend generations? We’re already questioning whether seven years of updates really mean the same thing as seven years of cutting-edge features. Google could put the Pixel series on the map as an evolving platform for the industry’s best AI technology, regardless of which generation you buy in. But it’ll have to sacrifice a little Pixel 9 prestige to do so.

Faster charging means I’m buying the Pixel 9 Pro XL

Opinion post by
Robert Triggs

There aren’t many surprises left with the upcoming Google Pixel 9 series, but I was pleased to learn that the Pixel 9 Pro XL will sport faster charging when it launches in just a few days’ time. Initially, I was drawn to the obvious appeal of a smaller and more pocketable Pro, but faster charging is seriously pulling me towards the XL.

See, I love my Pixel 8 Pro, but it’s still painfully slow to fully charge, despite gradual generational improvements. My phone floats worryingly below the 50% mark most of the time (it’s on 31% right now) — the Battery Saver chime no longer fazes me, and I’m no stranger to Google’s Extreme Battery Saver prompt either. Battery anxiety? What’s battery anxiety?

It’s not that the phone’s battery life is bad — far from it. The “problem” is that I only ever leave my Pixel to charge for half an hour or so at a time, but the phone takes about 80 minutes to fill. This isn’t an issue on work days when I have a charger close by, but I’ve lost count of the weekends I’ve silently prayed to the battery gods to extend 10% into a couple more hours.

Rob’s horrible charging routine

Credit: Robert Triggs / Android Authority

Faster charging means the Pixel 9 Pro XL is the pick for chaotic chargers (like me).

I can go a whole week without my Pixel hitting 100% charge, which I know is abnormal. I could charge my phone overnight like a regular person (sometimes I do), but I’m afflicted with doom-scrolling my way through the night (the curse of a restless toddler) and rolling like a crocodile after eventually passing out from boredom. It’s a recipe for a USB-C necklace.

Clearly, I’m a quick-top-up guy more suited to the blazing-fast speeds of a OnePlus handset than Google’s conservative approach, but I can’t leave that Pixel camera and software behind. I long for a Pixel that can hit 70% after 30 minutes on the plug rather than 50% to keep those Battery Saver notifications at bay. Thankfully, that’s exactly what the Pixel 9 Pro XL promises. Unfortunately, the smaller Pixel 9 and 9 Pro will only hit 55% in half an hour, exactly the same as I recorded for the current-gen models.

C'mon Google, why leave the broader Pixel 9 series stuck with sluggish power levels?

We still don’t know the exact power level, but the info we have suggests that the Pixel 9 Pro XL has roughly 5-6W more peak power than the 8 Pro, so something like 33W. The Pixel 8 Pro pulls around 27W from the wall, by comparison. The only downside is that you’ll need Google’s new 45W charger to hit those levels, unless you already have a powerful USB PD PPS plug lying around.

Charging speed in the reports vs. advertised charging speed:

  • Pixel 9: reported 24.12 W, advertised ?
  • Pixel 9 Pro: reported 25.20 W, advertised ?
  • Pixel 9 Pro XL: reported 32.67 W, advertised ?
  • Pixel 9 Pro Fold: reported 20.25 W, advertised ?
  • Pixel 8: reported 24.66 W, advertised 27 W
  • Pixel 8 Pro: reported 26.91 W, advertised 30 W
  • Pixel Fold: reported 22.5 W, advertised 23 W?

33W is not a huge jump and probably won’t reduce the phone’s time to full by all that much. It’s certainly not going to rival the likes of SuperVOOC-powered phones and maybe not even Samsung’s 45W Galaxy S24 Ultra. I’d still like to see the Pixel series as a whole charge much faster too, but it looks like that’s not happening this year.

Still, a boost to the early stages should leave chaotic chargers like me with more juice in the tank from just a quick top-up. I’d be happy with that. Pixel 9 Pro XL it is then, I suppose.

Smartphone marketing demystified: The specs that matter, those that don’t

While it’s hard to go wrong with any of today’s top-tier smartphones, ending up with the best bang for your buck or splitting the mid-range wheat from the chaff is still often a case of deciphering a phone’s spec sheet. This already laborious task isn’t helped by the marketing gobbledegook thrown around by various brands in a bid to make their otherwise mediocre handsets stand out. Just what the heck is “virtual RAM” anyhow?

To help, let’s break down all the key smartphone specifications and highlight what to look out for — and what to ignore — when making your next purchase.

Your phone’s brain: The processor

Good specs:
  • Snapdragon 8XX or 7XX series
  • Tensor, Exynos 2XXX, Dimensity 9XXXX
Ignore:
  • Undisclosed “octa-core” CPU
  • An old chip that’s nearing end-of-life

We’ll start with the processor (or SoC) first. Weirdly, this is both the most and least important aspect of your phone, depending on what you expect from your next handset. If you have to have the absolute best performance, features, and networking capabilities, then a flagship chip is a must, but often these features are surplus to requirements.

There are too many chipsets to get into them all, but virtually every smartphone processor built since the turn of the decade is ample for running key mobile tasks: browsing Facebook, scrolling Insta, that sort of thing. Google’s Pixel range is a prime example of smartphones that don’t pack the absolutely fastest processors around yet still offer one of the best mobile experiences in their price brackets. It’s more about what your phone can do than what it benchmarks. That said, I’d urge everyone to avoid the bottom-of-the-barrel processors you’ll still find in ultra-affordable handsets, if it can be helped. Anything that lists itself as little more than an “octa-core” processor is still probably bad news.

Ignore core counts and GHz; you need to look at a chip's broader capabilities.

iPhones, of course, all sport high-end chips, so there’s little issue (or choice) here anyway. If you want to be sure of top-tier Android performance, stick to flagship-grade chips from the big players. Qualcomm’s Snapdragon 8 and 8S, MediaTek’s Dimensity 9___, Samsung’s Exynos 2___, and Google’s Tensor series are all rock solid, even if you pick a model that’s a generation or two old at this point. High-end mobile gamers, however, will find the latest features, such as ray tracing, and the fastest performance on the latest processors, such as a phone with a Snapdragon 8 Gen 3, which is also a boon for heavy multitaskers and those who edit their photos and videos on the go. It’s those less mainstream use cases that really benefit from focusing on the processor as a key component, but even then, you have to consider thermals and cooling as well, and bigger phones tend to do better at that.

If you’re on a tighter budget, sliding down to the Snapdragon 7 or even 6 series, along with MediaTek’s more recent Dimensity 8XXX range, is a fair compromise that won’t disappoint on the networking or security fronts, and even AI capabilities are quickly making their way down to these price points.

How does the phone perform under stress, and does it have the gaming, AI, or other features you want?

Of course, you can get pretty granular on all the internal processor differences. CPU core counts and microarchitectures for general processing, GPU for gaming and other graphics, ISP capabilities for pictures and video, and the latest trends in NPUs for AI. While interesting from an enthusiast standpoint, we can’t mix and match these parts ourselves, and it would be a waste of time to make a purchasing decision based on specs like clock speed GHz or AI TOPS. It’s less of a headache to follow the general portfolio trends outlined above and pay attention to the on-device features that a given handset is capable of and maybe a benchmark or two if you need higher-end performance.

The bottom line is that picking the best processor used to matter a lot more than it does today. However, elite gaming and AI are starting to shift focus back to the flagship-tier chipsets once more.

Cameras, cameras, cameras

OPPO HONOR and Xiaomi camera phones

Credit: Robert Triggs / Android Authority
Good specs:
  • Wide, ultrawide, and telephoto combo
  • Wide aperture on the main and tele
  • Good-sized sensors on all lenses
Ignore:
  • Counting megapixels
  • Ultra-long range zoom claims
  • Macro lenses

For most people, their smartphone is their primary camera. As such, navigating this increasingly complex area of a modern smartphone is a must, but it isn’t easy. First, let’s dive through the key terms.

  • Megapixels — More is better? Well, it depends. In theory, more pixels mean more detail, providing enough light to make it to the tiny pixel. More pixels in a small space means less light per pixel, which can reduce dynamic range, increase noise, or longer shutter speeds. Not good. Modern pixel-binning sensors aim to get around this by merging data from nearby cells while allowing for high-resolution photography, but you’re often left shooting at a lower resolution by default. Still, remember that just 12MP is more than enough for a 12-inch print. Don’t be swayed by the allure of a 200MP sensor.
  • Sensor size — The flip size of megapixels is the overall sensor size; the bigger the sensor, the bigger the pixels, and the better the light capture. 1-inch is as large as we’ve seen in smartphones, though around 1/1.3-inch is more typical for primary cameras and often much smaller for secondary and third cameras. Sensors below 1/2 are small by modern standards and won’t pair well with high megapixel counts or low-light environments. Bigger is better, but that comes with a larger camera bump as a trade-off, so there’s a limit, and around 1/1.5 inches or above is adequate.
  • Aperture — Part of the “exposure triangle,” the aperture measures how wide the lens opening is. Again, wider means more light, which is good, and more bokeh, which is also deemed good (mostly). However, very wide apertures and very large sensors can struggle with partial subject focus, particularly at close ranges, and they don’t make for the sharpest landscapes. Thankfully, variable aperture technology gives you the best of both worlds, but it’s only found in a handful of premium smartphones. Don’t dwell on this spec, but be cautious of any smartphone lens with an aperture below f/3; it probably won’t be very good in low light.
  • Focal length/zoom — These are two halves of the same coin; divide two lens’ focal lengths and you get the zoom factor when switching between them. For example, a 75mm telephoto lens has 3x the zoom factor of a 25mm lens. Paying attention to the optical zoom levels a phone has is important; you’ll receive the best image quality at these points. Factors in between will rely on software upscaling of some kind, which leaves a big gap between, say, a 1x and 5x lens. Equally, focal length tells you a little bit about what the lens is good for. Below 20mm is extremely ultrawide, good for landscapes and broad scenes but at the expense of distorted proportions. 35 mm is roughly equivalent to the human eye’s field of view, 50mm or so is considered the most flattering for portraits, and 100mm or more is a long-range zoom. Also, ignore any claims of 50x or 100x zoom; those are always digital and look terrible.

We could dive deeper into autofocus technologies (make sure your wide lens has AF at least!), backplane types, and the like, but that’s getting too deep into the weeds for this article and probably shouldn’t sway your entire phone choice unless you’re after something very, very specific. Instead, the next step is to look at what camera lenses the phone has. These typically fall into five categories: ultrawide, wide/primary, telephoto, periscope, and macro.

99% of the time, a dedicated macro camera is just there to pad out the numbers. They’re usually low resolution, tiny, and basically bad. Pretend the phone doesn’t have it; you’ll likely forget about it anyway. A wide and ultrawide pairing is most common in the mid-range market, offering a step back to fit more in but lacking long-range or truly portrait-friendly capabilities. Telephoto and periscope are two different ways of building a zoom camera; the latter bounces light off a mirror or two, creating a longer focal length but losing some light in the process. Ultra-premium phones regularly offer two zoom cameras to cover multiple distances with high quality. 3x to 5x is good for portraits and nearby subjects, while 10x will capture those concert stages. There are no strict winners here; take your pick based on the type of photos you typically take.

How many GB of space do I need?

Smartphone Specs Closeup

Credit: Robert Triggs / Android Authority
Good specs:
  • 256GB for multimedia
  • UFS4 storage type is the fastest
Ignore:
  • microSD card support (rare and often slow)
  • eMMC storage (slow and outdated)

Just like the processor, how much physical storage space (in gigabytes or GB) you need depends on how you use your phone. If you just make calls, check emails, and browse the same four websites, you can probably get away with a smaller storage option. But if you’re a gamer, photographer, or meme archivist, you’ll need a more forgiving amount of space.

Even though it’s still often the base configuration, 128GB isn’t all that much storage in the age of mass media and mobile photography. Subtracting the size of the OS and some apps, you might be lucky to be left with 80GB free for other content. That’s the equivalent of roughly 10,000 8MB JPEG photos, 20,000 four-minute MP3 tracks, or 80 hours of compressed 1080p video. That sounds like a lot, but bringing years of old pictures and whatever else to a new phone eats further into this. While you can mitigate physical limitations with cloud storage, that’s an expensive solution in the long term.

If you’re the designated family photographer, I recommend 256GB at minimum. You might even want to future-proof your purchase with 512GB, though those upfront prices can be eyewatering.

The other factor to consider is storage speed. While most flagships use the fastest storage available (UFS 4 at the time of writing), budget options often use slightly slower versions like UFS 3.1 or even 2.0. Mostly, this will marginally affect large app or game loading times or your phone’s ability to record very high-resolution (4K or 8K) video, which is less of a requirement for budget models anyway. I’d avoid anything still listed with eMMC storage, as that’s positively outdated.

Dazzling displays

samsung galaxy s24 ultra vs galaxy s23 ultra reflectivity screen on

Credit: Ryan Haines / Android Authority
Good specs:
  • Dynamic refresh rate (1-120Hz)
  • HBM (High Brightness Mode)
  • High PWM rate
Ignore:
  • Peak brightness in nits
  • 4K resolution
  • Niche HDR formats

Display technology has long been a battleground between the senses and snake oil. There’s a load we could get into here, from aspect ratios and contrast to sub-pixel layouts and refresh rates. Let’s hit those key terms again.

  • Resolution — Can you see the difference between 4K and 1080p on a 6-inch screen watching a compressed YouTube video? Absolutely not. In fact, your phone almost certainly defaults to an FHD+ software resolution, even if it has a QHD+ hardware panel, to help save on battery. An FHD+ resolution (above or around 1,920 x 1,080, accommodating for aspect ratio) is sufficient, even for a large form factor phone; consider anything above that a bonus, but don’t quibble over a few pixels
  • Brightness — Ripe for exploitation, peak brightness (in nits) is not a hugely helpful metric on its own because it fails to tell you under what circumstances this brightness is achieved and if it’s sustained. Often, the largest metric you see here refers to instantaneous peak brightness in a very localized part of the screen, such as when viewing HDR content. Ignore claims of 4,500 nits. 200 – 300 nits is all you need for indoor viewing, and 600 – 800 for outdoor. Anything above that is a bonus but not strictly necessary. Even if you love to watch HDR video on a tiny screen, peak 1,500 – 2,000 nits is plenty.
  • HDR — HDR technology is a boon for movie viewers, but its benefits are contentious for tiny screens that are often viewed in less-than-ideal conditions. Still, most high-end and even mid-range panels are HDR-capable. They often come in flavors supporting HDR10+ and/or Dolby Vision; take your pick depending on your preferred content format.
  • AMOLED, OLED, etc.— The OLED vs. LCD battle is over, and OLED won. Even many inexpensive smartphones now use some OLED in some form, whether that’s AMOLED, POLED, flexible OLED, or something else derivative, delivering superior viewing angles, contrast, and color. That said, ultra-budget phones still use LCD, and the viewing experience suffers as a result. I’d suggest springing for an OLED panel if you can.
  • Refresh rate — This spec can make more of a meaningful difference to how responsive your phone feels. Scrolling through web pages looks much smoother at 120Hz than at 60Hz, with 90Hz being a decent compromise for mid-range models. What you really want here, though, is an adaptive/dynamic refresh rate, preferably with a display that can go as low as 1Hz to save power when not showing moving content. These are most often LTPO-type displays reserved for the higher end of the market.
  • PWM rate — While refresh rate determines how quickly content updates on the display, PWM (Pulse Width Modulation) controls the actual pulsing rate of the display’s light in order to dim a display so it appears darker. Low PWM rates can cause headaches in the small percentage of users who are sensitive to flickering lights, even in cases where you can’t perceive any flickering. The effect is most acute when dimming the phone’s display when you’re in a dark room. Higher PWM values are good here, and an excess of 1,000Hz helps, but don’t agonize over this if you’re not sensitive.

A huge amount of technology is packed into the latest smartphone displays, and picking out exactly what you want depends on what you need from a display. Higher refresh rates will be most important if you’re a doom-scroller or gamer. If you like to read while commuting, a robust and reliable outdoor peak brightness will be key. Or if you find displays give you a headache while reading in the dark, grab one with a higher PWM rate.

RAM: Don’t just download more

galaxy S20 LPDDR5 Samsung Unpacked 2020

Good specs:
  • 8GB+ LPDDR5X for multitasking
  • 12GB+ LPDDR5X for AI/gaming
Ignore:
  • Virtual RAM

Your phone’s temporary storage, or RAM, is further down this list but nonetheless important, particularly if you’re eyeballing a phone for AI or gaming. 8GB of RAM has been and remains plenty for most mobile multitasking use cases, but if you want to keep lots of apps and games open or run trailblazing AI features from Gemini Nano, you’ll want 12GB or even more.

Equally, those demanding use cases want RAM that’s quick. At the time of writing, LPDDR5X is the fastest available type of RAM, but LPDDR4X is still fine for a budget model where basic multitasking is more important than loading up Genshin Impact.

There is a recent gimmick to be aware of here, though: virtual RAM. You might also see this listed as Dynamic RAM, Memory Expansion, or such, but the idea is the same. This is essentially swap space that stores unused programs in a portion of your main internal storage rather than in RAM. The benefit is that fewer apps will close if you fill up your regular RAM, but storage is slower than RAM, so there’s absolutely no performance benefit for AI or gaming. Virtual RAM is useful for phones with a small amount of real memory, but only to a point, and is not a replacement for proper RAM.

RAM is more important for AI phones. Gloss over virtual RAM, it's not a cure-all.

Virtual RAM allows companies to claim a phone has very large amounts of memory, such as 24GB, but the split may only be 12GB real RAM and 12GB virtual. That’s fine, but there’s not a huge benefit to virtual RAM, especially in such huge sizes. Always check the fine print, particularly on mid-rangers from China, where this trend is more prevalent, and make sure you buy a phone with a healthy chunk of physical RAM.

Charging power and protocols

Xiaomi 14 Ultra charging power test

Credit: Robert Triggs / Android Authority
Good specs:
  • USB Power Delivery (PPS) support
  • Qi wireless charging support
Ignore:
  • 100W or higher in phone

While we’re on the subject of inflated numbers, charging power has to be one of the biggest minefields to navigate in recent memory. It’s not just the Chinese brands claiming 100W or 200W that can catch you out; even Google’s Pixel 6 was caught playing fast and loose.

But more power equals faster charging, right? Well, yes, in theory, but are you measuring at the plug or the phone, how long can you sustain that power, and under what conditions? If I had a dollar for every ultra-high-wattage phone I’ve tested that failed to maintain peak power for more than two minutes, well, I wouldn’t be rich, but you get the idea. If you live in a warm country, these effects will be even worse. Even if you can hit 100W, so what if you’re confined to the in-box charger or bricks from one specific brand? While high power and fast times are nice, we should consider the battery longevity, real charge times, and ecosystem and e-waste trade-offs.

Forget 200W, grab a phone that charges nicely with third-party plugs and power banks.

What’s most important, in my book, is how quickly a phone charges via the USB Power Delivery standard — the default protocol for charging over USB-C. If your phone plays nicely with USB PD (and the newer USB PD PPS), it’ll charge quickly with virtually any modern plug. Around 45W takes even the largest batteries from empty to full in an hour or so, while 65W is properly fast for a phone and suitable for many laptops. 30W or below is on the slower side but still far better than many of the aforementioned proprietary brands that can sit at 18W or under when not using their special blend of brick and cable. Similarly, a phone with Qi or Qi2 wireless charging will play nicely with a range of accessories, even if it charges slower than proprietary standards.

Finally, a word on battery capacity (in mAh). This is too dependent on handset size and other specifications to give a definitive guide. However, 4,000mAh should see most users through a single day, while around 5,000mAh is better for gamers and power users. If you know you use your phone a lot, it’s better to err on the side of a bigger battery.

Maximum durability

Broken Cracked Screen in hand

Credit: Robert Triggs / Android Authority
Good specs:
  • Gorilla Glass protection
  • IP68 rating
Ignore:
  • No-name glass protection
  • Water-resistance claims with no rating

After um-ing and ah-ing about the internals, you should also consider the external hardware protecting your phone. There are two main things to ponder: screen/glass protection and water/dust resistance.

We have a handy guide on IP ratings. Broadly speaking, some level of water protection is a must. Accidents happen, and you’ll be glad you invested in an IP rating when “someone” spills coffee all over that expensive new purchase. We’d suggest an IP54 rating as the bare minimum, with an IP68 rating being the golden standard when spending money on upper-mid and flagship smartphones.

Likewise, glass protection can be the difference between “few!” and “$100s” down the drain and hours wasted organizing a screen replacement. Corning Gorilla Glass is the industry standard, with Victus 2 and Gorilla Armor being the strongest options around. Apple uses Corning’s Ceramic Shield, which touts a similar, if not superior, hardening process, and there are various other industry players offering their own flavors. Comparing the various glass types is fraught with difficulties, but newer tends to be better, so we suggest not picking a phone with anything too dated. Of course, something is better than nothing at all. Oh, and be sure to check if there’s a difference between front and back protection, if your phone has a glass back. There usually is, but you don’t want to trade down too far and end up with a smashed case.

I’d place less emphasis on any metal parts mentioned. While these can marginally affect a phone’s weight, aluminum, titanium, or others offer little to no indication of a phone’s ability to withstand drops or bends, as we’ve seen countless times over the years.

A weak Pixel 9 processor will test Google’s commitment to Pixel 8 Feature Drops

google gemini ask this video

Credit: Rita El Khoury / Android Authority

Opinion post by
Robert Triggs

If you’ve seen our latest Google Pixel 9 exclusive, the phone’s Tensor G4 processor is set to be the smallest change to the series so far. While peak performance has never been a Tensor accolade, there’s little to no upgrade in the chip’s cornerstone AI capabilities either. The Tensor G4 reportedly features exactly the same third-generation TPU, codenamed “rio,” running at the same clock speed as the Tensor G3. The reason is that Google reportedly missed deadlines for a more potent custom chipset, which will now have to wait until the Pixel 10, and had to hastily cobble together an improved Tensor for the Pixel 9 series.

If this holds true, surely the Pixel 8 series should be able to run all of the Pixel 9’s upcoming AI features? Well, the CPU and GPU upgrades appear to be nowhere near big enough to make a meaningful difference to any AI processing, the DSP that runs camera algorithms is the same as last gen, and the identical TPU is the core that binds Google’s on-device AI capabilities together.

I’d argue that the Tensor G4 shares so many core similarities with the G3 that virtually the only reason Google won’t bring its latest features, such as AddMe and Pixel Screenshots, to the Pixel 8 series, at least not in a hurry, is to upsell the Pixel 9. There’s no denying that bringing such features to the Pixel 8 would make Google’s best-ever flagship even more compelling, but it might also undermine launch excitement about the new models, despite their camera and other hardware upgrades. This raises a big question: Just how committed is Google to backporting features via Feature Drops?

Virtually, the only reason Google won't bring the Pixel 9's latest AI features to the Pixel 8 series is to upsell the Pixel 9.

Google’s history with Feature Drops is pretty hit-and-miss. While it has brought plenty of new features to the Pixel lineup over time, we’re still waiting on some of the bigger promises like Zoom Enhance. There’s no guarantee that all of Google’s latest AI features will even be available for the Pixel 9 at launch, so any hope of features making their way back to the Pixel 8 series in a timely manner feels remote. Still, the similarities between the Tensor G4 and G3 make this all the more possible than in previous years, so here’s hoping that, even if there’s a reasonable delay, we see as many Pixel 9 AI features on the Pixel 8 as possible.

There’s one exception to all this — RAM. On-device AI is RAM heavy; it’s the reason the lower-specced Pixel 8 didn’t initially ship with Gemini Nano, while the 8 Pro’s larger 12GB RAM pool made it possible. According to leaks, the Pixel 9 is expected to ship with 12GB of RAM, and the Pro models will all receive a boost to 16GB. That’s a lot more memory than the baseline Pixel 8, but 12GB matches the capabilities of the Pixel 8 Pro. Once again, then, it looks like the more affordable Pixel 8 stands to miss out, but Google’s previous premium model should be capable of matching the Pixel 9. However, just how broken up Google’s AI feature set will become across models remains to be seen.

The Pixel 8's 8GB RAM might be too small, but the 8 Pro is capable of matching the base Pixel 9.

Even so, a processor with few upgrades cuts through the usual inter-generational barrier and puts Google and the Pixel series in an interesting position. Can Google rely on the superb hardware upgrades alone to sell the Pixel 9 while using this opportunity to show that the best software features can transcend generations? We’re already questioning whether seven years of updates really mean the same thing as seven years of cutting-edge features. Google could put the Pixel series on the map as an evolving platform for the industry’s best AI technology, regardless of which generation you buy in. But it’ll have to sacrifice a little Pixel 9 prestige to do so.

Faster charging means I’m buying the Pixel 9 Pro XL

Opinion post by
Robert Triggs

There aren’t many surprises left with the upcoming Google Pixel 9 series, but I was pleased to learn that the Pixel 9 Pro XL will sport faster charging when it launches in just a few days’ time. Initially, I was drawn to the obvious appeal of a smaller and more pocketable Pro, but faster charging is seriously pulling me towards the XL.

See, I love my Pixel 8 Pro, but it’s still painfully slow to fully charge, despite gradual generational improvements. My phone floats worryingly below the 50% mark most of the time (it’s on 31% right now) — the Battery Saver chime no longer fazes me, and I’m no stranger to Google’s Extreme Battery Saver prompt either. Battery anxiety? What’s battery anxiety?

It’s not that the phone’s battery life is bad — far from it. The “problem” is that I only ever leave my Pixel to charge for half an hour or so at a time, but the phone takes about 80 minutes to fill. This isn’t an issue on work days when I have a charger close by, but I’ve lost count of the weekends I’ve silently prayed to the battery gods to extend 10% into a couple more hours.

Robs horrible charging routine

Credit: Robert Triggs / Android Authority

Faster charging means the Pixel 9 Pro XL is the pick for chaotic chargers (like me).

I can go a whole week without my Pixel hitting 100% charge, which I know is abnormal. I could charge my phone overnight like a regular person (sometimes I do), but I’m afflicted with doom-scrolling my way through the night (the curse of a restless toddler) and rolling like a crocodile after eventually passing out from boredom. It’s a recipe for a USB-C necklace.

Clearly, I’m a quick-top-up guy more suited to the blazing-fast speeds of a OnePlus handset than Google’s conservative approach, but I can’t leave that Pixel camera and software behind. I long for a Pixel that can hit 70% after 30 minutes on the plug rather than 50% to keep those Battery Saver notifications at bay. Thankfully, that’s exactly what the Pixel 9 Pro XL promises. Unfortunately, the smaller Pixel 9 and 9 Pro will only hit 55% in half an hour, exactly the same as I recorded for the current-gen models.

C'mon Google, why leave the broader Pixel 9 series stuck with sluggish power levels?

We still don’t know the exact power level, but the info we have suggests that the Pixel 9 Pro XL has roughly 5-6W more peak power than the 8 Pro, so something like 33W. The Pixel 8 Pro pulls around 27W from the wall, by comparison. The only downside is that you’ll need Google’s new 45W charger to hit those levels unless you already have a powerful USB PD PPS plug lying around.

Device | Charging speed in the reports | Advertised charging speed
Pixel 9 | 24.12W | ?
Pixel 9 Pro | 25.20W | ?
Pixel 9 Pro XL | 32.67W | ?
Pixel 9 Pro Fold | 20.25W | ?
Pixel 8 | 24.66W | 27W
Pixel 8 Pro | 26.91W | 30W
Pixel Fold | 22.5W | 23W?

33W is not a huge jump and probably won’t reduce the phone’s time to full by all that much. It’s certainly not going to rival the likes of SuperVOOC-powered phones and maybe not even Samsung’s 45W Galaxy S24 Ultra. I’d still like to see the Pixel series as a whole charge much faster too, but it looks like that’s not happening this year.
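For what it’s worth, the leaked XL figure works out to roughly a fifth more peak power than the Pixel 8 Pro figure in the table above, which lines up with those modest expectations:

```python
# Relative jump implied by the leaked Pixel 9 Pro XL figure vs the Pixel 8 Pro
# number in the table above.
pixel_9_pro_xl = 32.67    # reported peak draw, watts
pixel_8_pro = 26.91       # Pixel 8 Pro peak draw, watts

print(f"~{pixel_9_pro_xl / pixel_8_pro - 1:.0%} more peak power")   # ~21%
```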

Still, a boost to the early stages should leave chaotic chargers like me with more juice in the tank from just a quick top-up. I’d be happy with that. Pixel 9 Pro XL it is then, I suppose.

I’ve spent 48 hours with a Copilot Plus PC and I’m already worried

I was very excited when my Surface Laptop pre-order arrived two days ago, as I’ve been itching to try out these Arm-based, Snapdragon X-powered, Copilot Plus PCs (or whatever you want to call them) since the chipset was first announced in late 2023. Taking the battery-friendly, AI-ready, and ultra-connected benefits of the best smartphones and pairing them with performance that rivals best-in-class laptops sounds too good to be true. Unfortunately, after just 48 hours with the new Surface Laptop, I’m starting to feel that might be accurate.

I should caveat this by stating that the office use element of the Copilot Plus PC experience is perfectly fine, great even. It’s flawlessly powering through writing this article with me, and the battery stats state that I’ve enjoyed two hours and 36 minutes of screen-on time since its last charge, and I still have 76% to go. The battery life on this thing seems pretty rock solid, so at least that’s one promise ticked off the list.

That said, several hiccups in the past 48 hours are undoubtedly pivoting my eventual review in a more negative direction. Namely, app emulation is hit-and-miss, and I don’t really see what all the AI fuss is about, given that Recall is on hiatus until later in the year.

Battery life is great for office workloads, but everything else is less convincing.

But before we get to that, let’s wrestle with this whole running Windows on Arm malarkey. Yes, the battery-life benefits seem to be there (though more testing will tell), and the performance of native Arm applications is sublime if you can find them. And that’s the problem: I’m relying a whole lot on Microsoft’s Prism emulation layer to run x64 applications that aren’t yet natively built for Arm processors. Honestly, I’m surprised by how many of the apps I use on a daily basis don’t have native versions. LibreOffice, Lightroom Classic, Discord, Asana, and any Steam game (of course) all rely on emulation. I knew my more niche apps from smaller developers, including Feishin and Jellyfin for media, would rely on emulation, but it’s surprising that so many big projects aren’t on board at this stage. It’s not like Windows on Arm is new.

As for native support, I’ve used Photoshop, Slack, Spotify, Zoom, and the big three web browsers. Browsers are where Microsoft gets its “90% of user minutes are running on Arm native” nonsense, but they all run great. Still, I’ve suffered a number of black-screen glitches when running GPU-heavy pages in Edge with an external monitor attached, glitches that don’t appear in Firefox. Even native apps aren’t immune from issues, it seems.

Let’s be generous and say I have a 50/50 split of Arm and x64 apps installed. The problem remains that emulation performance feels so hit-and-miss. For instance, Lightroom Classic (just update it already, Adobe!) runs flawlessly when editing photos, but exporting JPEGs can bring it and other applications to their knees. On the other hand, Asana and Discord run like an egg and spoon race — stopping, starting, pausing, and loading. This is where Prism’s performance is a letdown; UI elements can temporarily freeze, sometimes system-wide, and I’ve even had music playback cut out for a split second. These issues don’t crop up very often, but when they do, you’re instantly reminded you’re not receiving the best Windows experience out there.

Microsoft Surface 7th gen Snapdragon X Elite CPU taskmanager

Credit: Robert Triggs / Android Authority

But that’s not the cardinal sin. No, the fact that most VPN apps don’t work because they don’t yet have native Arm versions might be an absolute deal breaker for some. I often need a VPN to check out regional website versions, and thankfully, I can still do that in my browser. However, many others have steeper requirements, including those in the enterprise space. Thankfully, VPNs are the only apps I’ve encountered that outright refuse to work.

Now, I’d cut Microsoft and developers some slack if Windows on Arm was a brand-new initiative, but Windows on Arm and Microsoft’s emulator have been around for seven freakin’ years, and we’ve had commercial products for six of them. How are we still discussing app development and emulation problems that Apple has eliminated in about half that time? It’s borderline ridiculous.

Windows has been emulating Arm for seven years, and it's still far from perfect.

OK, enough of the emulator bashing — the Snapdragon X Elite is powerful enough to brute-force its way through (most) of the minor issues. Let’s talk AI — it’s the key marketing material with these Copilot Plus PCs, after all. So what’s the Plus fuss all about? It’s a bit hard to tell. Windows Recall felt like the flagship feature, but that’s put on ice while Microsoft irons out some very warranted privacy concerns.

Without Recall, Copilot takes center stage as the most obvious user-facing AI feature, but the experience feels much the same as on regular PCs. Yes, the dedicated Copilot button to bring up a web app window is a nice touch (if you use AI a lot), but I still don’t trust Copilot (or any other text generator) for anything above mundane questions or reformatting the odd paragraph. With Copilot icons plastered across the toolbar and Edge browser, I’ve probably pressed the physical key three or four times in a couple of days. It hardly seems worth sacrificing good old right ctrl for.

Windows CoPilot Key

Credit: Robert Triggs / Android Authority

Other AI features are onboard, but they’re more niche. I haven’t yet found a use for the admittedly impressive Live Captions feature, and asking Cocreator to draw anything with people in it is often horrifying. Still, I found Studio Effects more useful for a couple of Discord calls. Eye Contact looks a bit creepy, but auto-framing and the bokeh portrait feature work very well. That said, pretty much all conferencing apps have background options baked in without needing an NPU, so it hardly feels new and exciting.

The other AI feature I encountered was purely by accident. While benchmarking some AAA games, I noticed a popup in a couple of titles informing me that AI Super Resolution was activated. If you can live with a measly 1,152 x 768 resolution, AI upscaling pushes several games from sub-30fps to a much more comfortable 50-60fps. Snapdragon X’s ability to play AAA PC games is, surprisingly, not terrible and is probably the best showcase of the built-in NPU elevating the user experience meaningfully. Again, though, the list of supported titles is far from comprehensive, and the settings menu to manually configure .exes is tucked away well out of reach.

Hopefully, Copilot Plus PCs kickstart more meaningful app development for Arm.

And I think that sums up my whole experience with this Copilot Plus PC so far — it doesn’t feel finished. Are incomplete AI features and unpolished emulation acceptable trade-offs for better-than-average battery life? I’m not so sure at prices well over $1,000. I have a feeling that’s my eventual review summed up right there.

Still, perhaps we’re at the tipping point in this chicken-and-egg scenario: more powerful and interesting laptops mean that developers pay attention, kickstarting more native Arm builds, and the whole ecosystem quickly improves. Here’s hoping, but that’s no consolation for the bitter taste of disappointment I’m currently experiencing. The last two days don’t feel all that different from the last seven years of trying to justify the compromises.

Snapdragon X-plained: What you need to know about the chip in Copilot Plus PCs

Microsoft Surface 7th gen app drawer

Credit: Robert Triggs / Android Authority

Qualcomm’s Snapdragon X series of processors are designed for PCs — well, Windows on Arm, Copilot Plus laptops, to be precise. They take some of the Snapdragon sauce we are familiar with from high-end smartphones and blend it with the high-performance requirements of the PC space. The aim is to provide a chip with performance that rivals Intel and Apple, paired with the energy efficiency we’ve become accustomed to in smartphones.

The core ingredients common to all Snapdragon X series chips are Qualcomm’s custom Arm- rather than x86-based Oryon CPU (no Intel or AMD here), a bigger version of its Adreno GPU taken from mobile, Hexagon NPU smarts for AI, and top-tier networking that enables the latest Wi-Fi and 5G standards. Microsoft chips in, providing the emulation layer in Windows on Arm to run x64 applications that haven’t yet been ported to run natively on Arm processors.

Here’s everything you need to know about the Snapdragon X series inside the latest Windows laptops.

Snapdragon X Elite vs X Plus explained

Snapdragon X comes in two major flavors — X Elite, which powers the first wave of top-tier Copilot Plus PCs, and X Plus, destined for more affordable laptops later in 2024. In total, Qualcomm has four official Snapdragon X SKUs: three under the X Elite branding and one more affordable X Plus unit. There is also reportedly a fifth, low-end X Plus model (X1P-42-100) that we leaked, but we haven’t heard anything official about it yet.

So what’s the difference between Snapdragon X Elite and X Plus, besides their intended price points? Well, Elite boasts 12 Oryon CPU cores versus 10 cores for the Plus. There’s also a smaller eight-core Plus model, which Qualcomm didn’t officially announce. Furthermore, Elite models have higher all-core and two-core turbo clock speeds, up to 4.2GHz, compared to the Plus’ 3.4GHz. This varies by specific model, but the top-tier Elite models pack Apple M-series-rivaling performance, with higher power consumption to boot.

Snapdragon X Elite and Plus Comparison Table

Credit: Qualcomm

The top-tier X1E-84-100 SKU also has a more powerful GPU than all the other models, hitting 4.6 TFLOPS vs 3.8 TFLOPS for the standard Adreno GPU setup. This is thanks to a higher GPU frequency of 1.5GHz, up from 1.2GHz.

Fortunately, all of the Snapdragon X models sport the same 45TOPS Neural Processing Unit (NPU), ensuring they’re all capable of running the same AI features. If you’re unfamiliar, an NPU augments a traditional CPU with number-crunching hardware built specifically for machine learning (AI) workloads. Not only is an NPU faster at these tasks, but it’s more power efficient too.

NPUs are purpose-built to handle machine learning workloads for Copilot Plus. Every Snapdragon X chip has the same one.

The series all support LPDDR5X memory at 8,448MT/s, 4K120 video decoding, and 8+4 lanes of PCIe 4.0 for storage and the like. All except the unofficial X1P-42-100, which supposedly drops to 4K60 decode and 4+4 PCIe 4.0 lanes. The range is manufactured using TSMC’s N4 process and supports Wi-Fi 7, Bluetooth 5.4, and 5G networking, with a discrete modem attached.
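For a sense of what that memory spec means in practice, peak bandwidth is just the transfer rate multiplied by the bus width. The 128-bit bus below is an assumption about a typical Snapdragon X memory configuration rather than a figure quoted above, so treat this as a rough sketch:

```python
# Peak bandwidth = transfer rate x bus width. The 128-bit bus is an assumed
# typical Snapdragon X memory configuration, not a figure quoted above.
transfers_per_second = 8448e6      # 8,448 MT/s LPDDR5X
bus_width_bytes = 128 // 8         # assumed 128-bit memory bus

bandwidth_gb_s = transfers_per_second * bus_width_bytes / 1e9
print(f"~{bandwidth_gb_s:.0f} GB/s peak memory bandwidth")   # ~135 GB/s
```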

The bottom line is that CPU performance is the big differentiator between the Snapdragon X line. There’s a showcase X Elite chip that pushes performance on the CPU and GPU front (no doubt the model the benchmarkers will want), but without knowing the TDP, this might not be the most interesting chip in the range. The other Elite chips are more conservative on clocks and power, whilst the Plus steps performance down just a little with a smaller CPU configuration.

Snapdragon X – Oryon CPU deep dive

Speaking of CPUs, perhaps the most interesting aspect of the Snapdragon X series is Qualcomm’s in-house Oryon CPU. I say Qualcomm’s CPU, but the company bought Nuvia for $1.4 billion in 2021, which had started work on a custom Arm CPU for data centers called Phoenix. That work would quickly become Oryon for Windows on Arm devices.

The most interesting thing about Oryon is that it’s not based on the x86/x64 architecture that PC stalwarts AMD and Intel use. Instead, Oryon is built on the Arm architecture (Armv8.7-A, to be precise) found in smartphone processors and Apple’s M-series of laptop chips. However, the latter are now on Armv9, which introduces additional important features.

Oryon is an Arm-based CPU, rather than x86/x64 like rivals Intel and AMD.

Anyway, let’s start with the high-end topology. Snapdragon X uses three clusters of up to four cores (though it can technically support eight cores in a cluster). Unlike smartphones, there aren’t separate performance-optimized and efficiency-optimized CPU cores. There’s no Arm-style big.LITTLE or Intel-type low-power E-cores; every Oryon core is the same micro-architecture-wise. However, it’s likely that different clusters have different peak frequencies to balance power consumption. For instance, we know that two CPU cores in different clusters can push the peak boost clocks.

Each cluster shares its L2 cache, which is 12MB in size. This means that four cores share access to a large pool of local memory for multi-threaded performance. Cluster-to-cluster snooping is implemented when a CPU group needs to grab data from another. There’s also a smaller 6MB L3 cache as part of the shared memory subsystem across clusters, GPU, and NPU, with a minimal 6-29 nanoseconds of latency for fast access. Altogether, that’s a hefty memory footprint in the vein of the Apple M series (Apple is estimated to use even bigger caches) and is likely key to Qualcomm reaching a similar level of performance.

Qualcomm Oryon CPU core

Credit: Qualcomm

Peeking inside each core, Oryon provides six integer number crunching units, four floating point units (two with multiply-accumulate for machine learning workloads), and four load/store units. Importantly, each FP unit supports 128-bit NEON for number crunching on smaller data sizes right down to INT8, but not as small as INT4 used by some highly compressed smartphone machine-learning models. This helps mitigate the lack of SVE (introduced in Armv9) and the wider pipelines that we see in modern AMD and Intel chips. Still, that’s a pretty big CPU that’s a smidgen larger (execution-wise) than the latest Arm Cortex-X925 destined for 2025 smartphones.
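The reason those data sizes matter comes down to simple SIMD arithmetic: the narrower the data type, the more elements fit into each 128-bit NEON vector and the more work each instruction does. A quick sketch of that math (generic SIMD arithmetic, not an Oryon-specific measurement):

```python
# How many elements fit in one 128-bit NEON vector for different data types.
# Standard SIMD arithmetic; real throughput also depends on the instruction
# mix, so treat this as an upper bound on per-instruction parallelism.
VECTOR_BITS = 128
for name, bits in [("FP32", 32), ("FP16", 16), ("INT8", 8), ("INT4", 4)]:
    print(f"{name}: {VECTOR_BITS // bits} lanes per 128-bit vector")
# INT4 would double the lane count over INT8 again, which is why its absence
# matters for heavily quantized on-device models.
```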

No efficiency-cores here, Snapdragon X goes all in with up to 12 big CPU cores.

Keeping that CPU core fed is a major task. Qualcomm accomplishes this with a large 192KB L1 instruction cache and 96KB data cache, paired with 8-instructions-per-cycle decode. The re-order buffer hits a huge 650 micro-ops (or larger), allowing for a frankly enormous out-of-order execution window (think of this as a queue of small instructions the processor could run next).

Jargon aside, keeping a big core running with things to do and powering off when it’s not in use is the key to robust power consumption. You want to avoid situations where the core is on but suffers a “bubble” without instruction to process. The aim of having so many instructions sitting around within easy reach is that there’s always something it could be doing. However, historically, there’s been a diminishing return for storing so many instructions that are simply waiting, but this doesn’t seem to apply for modern Arm chips. For comparison, the Cortex-X925 has a 750 micro-op re-order buffer for a 1,500 out-of-order window, but Intel’s Lunar Lake stores just 416 entries.

Anyway, the TLDR is that the Snapdragon X’s Oryon CPU has a pretty big core paired up with tons of memory to keep it running at full tilt when needed. That’s likely to produce solid performance, but all that memory costs a small fortune in silicon area, hence why this is a premium-tier product.

Adreno graphics explained (finally)

Those familiar with Snapdragon will recognize the X-series’ GPU — the Adreno X1 is a bigger version of Qualcomm’s mobile GPU. Usually, Qualcomm doesn’t spill the beans on its graphics architecture but has opened up a lot more about the Adreno X1 as it dukes it out with bigger GPU names in the PC space.

At a high level, the Adreno X1 supports many key desktop-class GPU features, including DirectX 12.1 (not 12.2), DirectX 11, OpenCL 3.0, and Vulkan 1.3 feature sets. This includes ray tracing (via Vulkan) and variable rate shading, which are essential in modern PC titles and are slowly gaining traction in mobile.

Qualcomm levels up its Adreno GPU from mobile, making it a solid competitor for Intel's integrated graphics.

The Adreno X1 is built for both tile-based rendering (binned mode), typically seen in smartphones, and the direct rendering more associated with the PC space. The difference is that a tile-based approach splits the scene into smaller sections, keeping data in local cache to reduce power consumption. A binned-direct mode attempts to combine the best of both, leveraging a local high-bandwidth 3MB SRAM. The mode of operation is determined by the graphics driver, and Qualcomm calls this rather unique setup FlexRender. The idea is that the X1 can benefit from mobile-style power consumption or PC-class performance, depending on what best suits the workload.

Regardless of the mode of operation, the Adreno X1 features six shader processors with 256 32-bit floating point units each, for a total of 1536 FP32 units. Peering deeper into each shader processor, one can see two micro-shader/texture pipelines with their own scheduler and power domain. Each comprises a 192KB L1 cache, a texture unit running at eight texels per clock, 16 elementary functional units (EFUs) for advanced math functions, 128 32-bit ALUs, and 256 16-bit ALUs.
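Those unit counts are also where the TFLOPS figures quoted earlier come from: peak FP32 throughput is simply the number of FP32 ALUs, times two operations per fused multiply-add per clock, times the clock speed. A quick sketch using the numbers above:

```python
# Peak FP32 throughput = FP32 ALUs x 2 ops (fused multiply-add) x clock.
fp32_units = 6 * 256            # six shader processors x 256 FP32 units = 1,536

for clock_ghz in (1.5, 1.2):    # X1E-84-100 vs the standard configuration
    tflops = fp32_units * 2 * clock_ghz * 1e9 / 1e12
    print(f"{clock_ghz}GHz: ~{tflops:.1f} TFLOPS")
# ~4.6 TFLOPS at 1.5GHz and ~3.7 TFLOPS at 1.2GHz, roughly in line with the
# 4.6 vs 3.8 TFLOPS figures quoted earlier for the X Elite SKUs.
```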

Adreno X1 Shader Processor Explained

Credit: Qualcomm

That latter part is important; the core can run FP32 and FP16 operations concurrently, and the FP32 ALUs can pitch in for even more 16-bit data crunching if required. Speaking of number formats, the 32-bit ALU supports INT32/16, BF16, and INT8 dot products, making it adept at machine learning workloads. The 16-bit ALUs also support BF16, which is handy for ML.

Another interesting point is that Qualcomm uses a large wavefront (parallel operations) size compared to rivals AMD and NVIDIA. 32-bit operations arrive in groups of 64, while 16-bit operations stream in 128 at a time. Very wide designs typically suffer from bubbles where the core runs out of things to compute (rivals AMD and NVIDIA use 32 wide wavefronts for 32-bit operations), which is bad for power efficiency. Perhaps Qualcomm mitigates this intelligently, powering down its micro-shader cores.

In terms of performance, we ran Crysis on the Snapdragon X Elite but had to compromise with a 720p resolution and medium graphics to achieve semi-decent frame rates. Other titles can leverage Microsoft’s new Automatic Super Resolution technology to improve frame rates, including The Witcher 3 and Hitman 3. The trade-off is that you’re limited to a very low 1,152 x 768 pixels. This certainly isn’t a gamer’s chipset, but you can achieve decent frame rates with some heavy compromises.

For a quick comparison, an entry-level laptop gaming GPU like the NVIDIA RTX 4050 packs 13.5 TFLOPS of FP32 compute, almost three times the raw throughput of the Adreno X1. Instead, the X1 looks more competitive with Intel’s latest integrated graphics parts, which range between 2 and 8 TFLOPS. However, Snapdragon X has the added complication of emulating games compiled for x64. Speaking of…

What you need to know about Windows on Arm emulation

Windows logo on laptop stock photo (16)

Credit: Edgar Cervantes / Android Authority

While Arm CPUs can deliver high performance and remarkable energy efficiency, this transition brings new problems in the form of supporting legacy applications.

Windows has historically run on the x86 and x64 platforms of AMD and Intel, meaning the low-level CPU instructions that the OS and its applications are compiled for aren’t supported by Arm processors. Microsoft rebuilt the core of Windows to run on Arm CPUs and has released developer tools to help developers compile native Arm applications more easily.

Running older apps that aren't Arm-native? You'll take a (small) emulation performance penalty.

This has paid off somewhat over the past seven years of the project; Microsoft says that about 90% of the “app minutes” users spend daily are now in apps with a native Arm version (likely thanks to web browsers). However, there are still swathes of modern and legacy Windows applications that aren’t yet Arm-native.

Windows on Arm has long run an emulator that converts code in real-time to support these apps. That ensures that software works but comes with a hit to performance, particularly for demanding real-time applications, like video conversion and gaming, and those requiring specific instructions like AVX2. Microsoft calls this hit “minor,” but previous Snapdragon chips have suffered. We’ll have to see if it’s much improved with the more powerful X-series of chips.

Fortunately, just before Copilot Plus PCs arrived, Microsoft’s updated emulation layer (now called Prism) claimed 10% to 20% additional performance for existing Arm chips (like the older Snapdragon 8cx). We tested the emulator’s performance on the 8cx before and after the update; here are the results:

  • Firefox (Speedometer 3): +10%
  • Cinebench r23 (Single-core): +8%
  • Cinebench r23 (Multi-core): +4.5%
  • HandBrake (h.264 software encoding time in seconds): +8%

Lofty claims of 20% improved performance are clearly the outliers, but these are still pretty decent gains for applications that still rely on emulation.

While the software emulation problem is more in Microsoft’s hands than Qualcomm’s, the latter has built features into its Oryon CPU to handle x86’s memory-ordering and floating-point behavior, which should further boost emulation performance. If Qualcomm moves to Armv9 with its next-gen laptop CPU, SVE support will also help with instructions that require wider vectors. We expect emulation performance to be decent and to improve over the coming years.

Should you buy a Snapdragon X / Copilot Plus PC?

Microsoft Surface 7th gen homescreen

Credit: Robert Triggs / Android Authority

In addition to pure specifications, there are many features to consider when looking at the first wave of Copilot Plus PCs. First and foremost, the addition of an NPU means these laptops benefit from exclusive Windows features, though they’ll have to wait a while before Recall re-debuts.

As we’ve seen, Snapdragon X promises competitive performance with Intel’s latest chips and the powerhouse Apple M3 (though perhaps not quite the newer M4). On top of that, battery life should last well in excess of a busy workday, setting these laptops up as true MacBook competitors. Perhaps the biggest unknown, though, is just how well x64 applications will hold up under emulation.

The first reviews are rolling in as we speak, so it won’t hurt to wait a few more weeks to see if the Snapdragon X Elite and Copilot Plus PCs are worth your hard-earned cash.

Gaming on Snapdragon X: Can it run Crysis?

Snapdragon X benchmarks can it run crysis

Credit: Robert Triggs / Android Authority

With mobile smarts ramped up to suit a powerhouse PC form factor, Snapdragon X processors are an interesting prospect for mobile and laptop aficionados alike. I’m lucky enough to have my hands on the new Microsoft Surface Laptop, complete with the Snapdragon X Elite (X1E-80-100) onboard, and I couldn’t resist seeing if this chip could handle a little bit of AAA gaming (off the clock, of course, boss).

As a parent of two, my Steam library needed a little dusting off, but it’s not like Qualcomm is positioning the Snapdragon X platform at hardcore gamers anyway. Its Adreno X1 GPU is still an integrated component, after all, with a lowly 3.8TFLOPS of compute on this model that puts it well behind discrete mobile cards, let alone beefy desktop GPUs. Instead, Copilot Plus PCs are marketed for their AI smarts and battery life. Still, between modern classics including GTA V, Hitman 3, Crysis Remastered, and The Witcher 3, I feel like I have a reasonable sample of games you might be tempted to boot up on the go. Let’s find out if they can actually run.

Before we jump into the benchmark results, there are some important things to note. First, all of these games (and, in fact, anything you run through Steam or other launchers) are currently compiled for x64 processors (see AMD and Intel), not Arm (see Snapdragon X). That means Windows secretly spins up its Prism emulation layer to get these games running, which incurs a performance penalty. How much? We’ll just have to see.

Secondly, I spotted that some of these games ran with Windows’ “Automatic Super Resolution” (Auto SR) enabled by default. This new feature is exclusive to Copilot Plus PCs, leveraging their NPU capabilities to upscale low-resolution rendering for better performance. We’ll discuss this a bit more later on, but the key thing to note is that it lowers the output resolution on the Surface Laptop I’m using to just 1,152 x 768.

Snapdragon X Elite gaming benchmarks

My expectations for the Adreno X1 GPU are firmly in check, so I started by setting all these games to medium graphics settings. GTA V and Witcher 3 have SSAO enabled, but I declined anti-aliasing and ray tracing in every game where possible in favor of extra frames. The results below track the minimum and average frame rates across these games’ benchmark apps (and a fairly brisk run around some enemies in The Witcher 3). First, let’s run the tests at the Surface Laptop’s native 2,496 x 1,664 resolution (or as close as possible in the case of the Crysis benchmark).

Snapdragon X Elite Gaming Benchmarks High Res

Credit: Robert Triggs / Android Authority

As you might have expected, the results are not great even with medium settings. Crysis hobbles the Adreno X1 at this resolution, with an average FPS of just 19. Grim. Hitman is barely any better, with low average and minimum frame rates that make it a choppy mess to play. The Witcher 3 is slightly more playable but dips below 30fps far too regularly to be enjoyable. Likewise, GTA V has a much higher average frame rate, but the game grinds to a slideshow when heavy physics is employed. Clearly, AMD and NVIDIA aren’t going to lose any sleep over the Snapdragon X Elite.

Auto SR is a Copilot Plus PC's secret weapon to run AAA games on low-power graphics.

At native resolutions, frame rates are a pretty dismal sub-30fps affair that’s headache-inducing to play for more than a few minutes. However, Copilot Plus PCs have a trick up their sleeve — Auto SR. This runs at a very low resolution, making it a bit backward compared to rival technologies that are regularly employed to output 4K. Obviously, a low resolution will run much better than the native display pixel count, so I ran all the games again at 1,152 x 768 (or 720p if that wasn’t a supported option) and then re-ran them with this AI-powered super-resolution scaling technology enabled.
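It’s worth spelling out how much work that resolution drop saves before any AI gets involved. Comparing raw pixel counts between the Surface Laptop’s native panel and the Auto SR output resolution used in these tests:

```python
# Pixel counts at the Surface Laptop's native resolution vs the Auto SR
# output resolution used in these tests.
native = 2496 * 1664       # 4,153,344 pixels
auto_sr = 1152 * 768       #   884,736 pixels

print(f"Auto SR output is {auto_sr / native:.0%} of the native pixel count")
# Roughly a fifth of the pixels to shade per frame, which explains most of
# the jump from sub-30fps to 45-60fps before AI upscaling adds anything.
```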

Snapdragon X Elite Gaming Benchmarks Low Res

Credit: Robert Triggs / Android Authority

So can the Snapdragon X Elite run Crysis? Unfortunately, Windows’ super-resolution feature doesn’t support Crysis, so there’s no benefit to be had here, much to my immeasurable disappointment. Still, it technically runs, and at an OK 41fps average at 720p, which is no doubt faster than when I played the original on my beloved ATi X1950 Pro. You’ll have to settle for low settings if you want something approaching a silky smooth frame rate, but the game runs passably on Qualcomm’s Snapdragon X Elite even without Auto SR support. Not terrible.

Of course, the other games in our list benefited from dropping the resolution back too, hitting frame rates above 30fps and in some instances closer to 60fps. Still, flicking the AI Auto SR switch yields even more frames. Well, at least most of the time.

Auto SR can provide a substantial boost to both minimum and average FPS.

GTA V sees the smallest change to its average frame rate, which sits well above 60fps anyway. However, minimum fps leaps up by 46%, making for a smoother ride, though it’s still hampered by CPU-dependent scenes. Conversely, the more GPU-intensive Witcher 3 sees little change to its minimum fps but a 30% boost to its average frame rate, taking it up to the comfortable 60fps mark. Hitman 3 is more of a mixed bag. The Dubai benchmark sees a whopping 60%-odd gain to its minimum and average fps by turning to this AI upscaling technology, which again makes the game far more playable than just dropping the resolution alone.

Now, I ran both Hitman tests because Dartmoor is incredibly physics-heavy, stressing any decent CPU and applying even more pressure when running under emulation here. This explains the super low minimum fps results we see regardless of whether super-resolution is enabled or not. So, Auto SR clearly helps out in GPU-bound instances, but it can’t improve frame rates for CPU-bound scenes. Still, Hitman runs pretty well in real gameplay when using AI upscaling.

AI Super Resolution to the rescue?

Windows Automatic Super Resolution Settings menu

Credit: Robert Triggs / Android Authority

Windows’ not much talked-about Automatic Super-Resolution feature is a bit of a silver bullet for Copilot Plus PCs — and their Snapdragon X chips inside. Flip the switch, and these laptops feel like much more capable gamers than they first appear. While obviously still not able to deliver truly high-end graphics options, frame rates, or native resolutions, it’s a brilliant addition for sneaking in a game on the go. Leaping from sub-30 to 45-60fps makes a world of difference to playability and turns your sensible work laptop into something a little more fun.

So how does it work? According to Microsoft, “Auto SR functions by automatically lowering the game’s rendering resolution to increase framerate, then employs sophisticated AI technology to provide enhanced high-definition visuals.” With that in mind, it helps to think of this as NVIDIA’s DLSS or AMD’s FSR in reverse. It’s more like a cross between variable rate shading and AI-enhanced super-resolution detail.

See, Windows Auto SR doesn’t upscale a game to match a high-resolution display. In fact, you have to settle for a resolution that’s far below typical modern gaming targets of 1440p and 4K. There’s no getting around the fact that 1,152 x 768 doesn’t look particularly sharp, even on the Surface Laptop’s modest 15-inch screen. Aliasing artifacts are abundant, and you’ll have to give up some of those valuable frames you just clawed back if you want rid of them.

Auto SR is the inverse of NVIDIA's DLSS. It runs at a low resolution and bumps down even lower to improve fps.

Instead, what I believe is happening is that the rendering resolution is sometimes even lower than 1,152 x 768. AI is used to scale up these frames so you can’t see the difference, which explains why this technique is quite good at improving minimum fps values in GPU-bound games. I suspect the overall low resolution is ultimately a limitation of the 45TOPS of NPU power found in Copilot Plus PCs. NVIDIA’s DLSS, for instance, runs on much more powerful hardware to reach 4K. To Microsoft’s credit, I really couldn’t tell if or when this was happening. Every still I captured looked the same with AI on or off and it’s even harder to make out any changes that happen during motion.

That said, Auto SR exacerbates jankiness when frame rates fall very low. This was readily apparent in the Dartmoor benchmark; several runs temporarily slipped into a Matrix-style Deja Vu. Despite the higher frame rates, I’m not convinced any of these titles felt buttery smooth with Auto SR enabled. It also makes text distort into drunken fonts, which can give the appearance of a goofy SNES emulator upscaler.

The Witcher 3 - Auto SR off vs. Auto SR on (comparison screenshots)

Still, running graphics upscaling on a chip’s dedicated NPU (necessary to be classed as a Copilot Plus PC) is an inspired idea, as it doesn’t steal many, if any, resources from the GPU. In this instance, Auto SR leverages Qualcomm’s Hexagon NPU that sits alongside the Adreno X1 GPU inside the Snapdragon X chipset, but this technique should work on future AMD and Intel PCs with integrated NPUs too. To be honest, I’m surprised we haven’t seen something similar in the Android gaming space, given the hardware is already there and pixel requirements are much lower.

You can find a list of Auto SR compatible titles at this link, which includes newer titles like Cyberpunk and The Last of Us. I can’t be sure these demanding titles will run quite as well without testing them, but they should still see some benefit. Unfortunately, not every game supports or even works well with Automatic Super-Resolution enabled. GTA V, for example, ships with super-resolution off by default, and while enabling it drastically improves the frame rate, menus and UI elements flicker and sometimes black out completely. On the other hand, Crysis Remastered and many other games don’t support the feature at all, so this certainly isn’t the cure-all for gaming on Arm-based PCs. The experience is not completely ready for prime time, much like Windows Recall.

Equally, I’m not entirely convinced by the decision to have this enabled by default for some games. It deprives gamers of balancing resolution and graphics settings themselves. The fact that the setting is hidden away in the new “Display > Graphics” menu also means yet another panel for gamers to faff with before they can just get playing. Though Windows does display a nice prompt to let you know Auto SR is working when you boot up the game.

Still, who can turn their nose up at free extra frames? Auto SR can’t disguise Snapdragon X as a serious rival to a proper gaming laptop, but it does mean that Copilot Plus PCs can dabble in a little light gaming without framerates that’ll tank your W/L ratio.

I’d given up, but the Snapdragon X launch may have saved Windows on Arm

Windows logo on laptop stock photo (17)
Credit: Edgar Cervantes / Android Authority
Opinion post by
Robert Triggs

If you’ve tuned into Computex this week, the headline act has been the debut of numerous Windows on Arm laptops powered by Qualcomm’s long-awaited Snapdragon X Elite and X Plus platforms. You might not think it based on the launch hype, but this exclusive collaboration between Microsoft and Qualcomm to bring Arm energy efficiency to Windows is actually entering its seventh year. Finally, the long-touted (if not a tad overblown) benefits of leaving x86 behind seem to have arrived.

I’ve dipped my toes into the Arm project on a couple of occasions. First with 2018’s Lenovo Miix 630, powered by 2017’s flagship smartphone-class Snapdragon 835 processor. Performance was pretty poor for laptop use cases, with sluggish multi-tasking and poor emulation of non-Arm-based applications. The ecosystem of native Arm apps was absolutely rubbish back then, severely limiting the platform’s appeal. I still have this little 2-in-1 around, but it’s such a slog to use that it’s sat collecting dust. Still, those were the early days, and Windows on Arm would eventually improve.

Windows is at a crossroads, Microsoft has to decide if it wants to be more like Apple or Android

https://www.youtube.com/watch?v=taVznf-giBk

Microsoft’s Windows has just arrived at a crossroads. The status quo is being challenged by a mainstream move into the Arm architecture and a heavy push for AI as a major feature of Microsoft’s software ecosystem. But while these Copilot Plus laptop features sound exciting, our admittedly short hands-on with Recall suggests many are only skin deep. What Windows really needs to maximize the potential of this pivotal shift is a more substantial software refresh. Yes, those control panel/settings duplications are already well documented (and should be fixed); what I’m talking about is a more fundamental revamp of the core focus of the Windows OS. About the level of what Microsoft wanted to do with Windows 8, only much better and suited to next-gen devices that consumers actually want.

The problem is that such a move could fundamentally shift Windows to a more walled-garden Apple-style approach rather than the (mostly) Android-esque openness that has fostered a platform diverse enough to cater to both gamers and professionals at once. I’m not even sure it’s something I really want, but sitting on the fence isn’t going to move the platform forward.

Deep dive into Arm’s new Cortex-X925 and Immortalis-G925 for mobile

Arm Cortex-X925

Mobile chipset development continues to advance at a brisk pace, bringing us superior gaming performance, accelerating the latest AI features, and more power-efficient PCs. Arm, one of the companies charting the course, has announced its 2024 selection of CPU and GPU cores to power these growing use cases.

Some (but not all) of 2025’s next-generation top-tier smartphones will be powered by Arm’s newly announced cores. Arm has been giving out fewer details on its CPU and GPU technologies in recent years, but let’s examine the announcements in closer detail to see what we can expect.

The big one: Arm Cortex-X925 core

The flagship CPU in Arm’s 2024 portfolio is the powerhouse Arm Cortex-X925. Despite the name change, this is the direct successor to last generation’s Armv9.2 Cortex-X4 found in processors like the Qualcomm Snapdragon 8 Gen 3. We had anticipated this core to be called the Cortex-X5, but Arm has changed the moniker to match other products in this year’s portfolio.

Headline figures for the Arm Cortex-X925 include a 15% IPC improvement over the Cortex-X4. This extends to 36% once the gains from moving to 3nm manufacturing, clock speeds in excess of 3.6GHz, and larger caches are factored in. AI performance sees even bigger potential gains, running some models 46% faster on the CPU than the X4. The bottom line is that single-core CPU capabilities will see a significant uplift next generation.
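As a rough sanity check on how those figures stack up, performance gains compound multiplicatively: IPC times clock speed, with the remainder coming from the process node and caches. The split below is my own inference from Arm’s quoted numbers, not a breakdown Arm has published:

```python
# Performance gains compound multiplicatively: IPC x clock x everything else.
# The ~12% residual attributed to the 3nm node and larger caches is an
# inference from the quoted figures, not a number Arm has published.
ipc_gain = 1.15                    # quoted like-for-like IPC uplift
clock_gain = 3.6 / 3.4             # ~3.6GHz vs the X4's ~3.4GHz

combined = ipc_gain * clock_gain
residual = 1.36 / combined         # whatever remains of the quoted 36% total
print(f"IPC x clock: +{combined - 1:.0%}, leaving ~+{residual - 1:.0%} "
      "for process and cache improvements")
```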

Spec | Cortex-X925 | Arm Cortex-X4 | Arm Cortex-X3 | Arm Cortex-X2
Peak clock speed | ~3.6GHz | ~3.4GHz | ~3.25GHz | ~3.0GHz
Decode width | 10 instructions | 10 instructions | 6 instructions (8 mops) | 5 instructions
Dispatch pipeline depth | 10 cycles | 10 cycles | 11 cycles for instructions (9 cycles for mops) | 10 cycles
OoO execution window | 1,500 (2x 750) | 768 (2x 384) | 640 (2x 320) | 448 (2x 288)
Execution units | (assumed) 6x ALU (some 2-cycle), 2x ALU/MAC, 2x ALU/MAC/DIV, 3x Branch | 6x ALU, 1x ALU/MAC, 1x ALU/MAC/DIV, 3x Branch | 4x ALU, 1x ALU/MUL, 1x ALU/MAC/DIV, 2x Branch | 2x ALU, 1x ALU/MAC, 1x ALU/MAC/DIV, 2x Branch
Architecture | ARMv9.2 | ARMv9.2 | ARMv9 | ARMv9

The gains from 3nm are an important part of the performance uplift expected for this generation. Arm has worked extensively to optimize its design for its partners on both FinFET and GAA processes (aka TSMC and Samsung). That leaves the 15% like-for-like improvement over the previous model, which comes down to several key changes in the X925’s microarchitecture.

In the processing core, for example, the X925 now has six SIMD units (the powerful number crunchers that batch-compute floating point math and AI workloads), up from four, allowing the core to do more heavy math in parallel. This likely accounts for most of the core’s AI/ML performance boost. There’s also an additional integer multiply unit and an extra floating point compare unit, which again increases the core’s sheer number-crunching capability when fully fed. Arm is reluctant to discuss die area these days, but the X925 must be getting pretty big.

Arm Client 2024 CPU Reference Cluster

Credit: Robert Triggs / Android Authority

Another interesting change is that some of the ALUs have been switched to dedicated 2-cycle instruction versions. This helps avoid stalls in the regular 1-cycle units but presumably means that these ALUs can’t perform some of the simpler arithmetic. This seems like the sort of design change that only intricate use-case data would allude to.

Instruction dispatch remains 10-wide, but Arm has doubled the X925’s maximum number of instructions in flight, now a colossal 1,500. Likewise, there’s twice the L1 instruction cache bandwidth and double the L1 instruction lookup table size to speed up instruction fetching. Meanwhile, the backend consists of an extra load pipeline to bring more data in from memory. In other words, there are plenty of out-of-order instructions floating around to keep those number-crunching cores busy.

That’s a lot of jargon, but the themes are very familiar from previous years — an ever-wider front end feeding an increasingly insatiable execution engine. In that sense, the X925 is an update to the X4 rather than a wholesale redesign. Even so, performance will take a solid leap forward again in 2025, though a fair chunk of the benefits also come from the move to 3nm.

Power-efficient Arm Cortex-A725 and A520

Sadly, Arm hasn’t provided as many details about the equally important Cortex-A725 — the new middle core that’ll form the backbone of upcoming mobile SoCs.

Arm claims that the A725 is 25% more efficient than the A720 and offers the option for higher peak performance if required. Again, though, this figure factors in the move to 3nm, and Arm hasn’t given us a standard metric for IPC gains. However, it claims a 20% boost to L3 traffic, which helps realize some extra performance.

On the microarchitecture level, Arm increased the re-order buffer and instruction issue queue sizes, improving throughput. A new 1MB L2 cache configuration also allows the core to reach a higher performance level. But if that’s it, the A725 is a minor revision of the A720, which was already an optimization of 2022’s A710 core.

Arm Cortex A725 efficiency graph

Credit: Robert Triggs / Android Authority

This leads us to the refreshed Cortex-A520, certainly the least exciting model in this year’s CPU trio. The core architecture remains unchanged. Instead, Arm has optimized the A520 footprint for upcoming 3nm processes, resulting in 15% energy-efficiency gains.

Looking at Arm’s power efficiency curves, this generation has an even greater crossover between the Cortex-A725 and A520. While the A520 can still reach the very lowest power levels for standby and low-clock tasks, the A725 can deliver vastly more performance for the same power as a maxed-out A520. In other words, many tasks run much faster and just as efficiently on the A725. It’s little wonder that Arm’s 2024 reference design suggests just two A520s, further reducing the number of small cores from what we see in current-generation chipsets.

Vastly improved gaming with the Immortalis G925

Arm continues to upgrade its GPU line-up too, with the Immortalis G925, Mali G725, and Mali G625. As with last year’s range, silicon partners need to use a larger core count to ensure robust ray tracing performance and leverage the Immortalis branding. Ten to 24 cores (up from a maximum of 16 last gen) is classed as Immortalis, six to nine cores makes for a G725 implementation, and one to five cores covers a budget G625 setup.

Regardless of the configuration, each G925 core promises a 30% reduction in power consumption when built on 3nm, up to 37% improved performance, and a whopping 52% gain in ray tracing over last-gen’s Immortalis G720. That last metric has a big caveat: it requires developers to leverage new APIs to designate targets as “intricate objects,” which the G925 then traces with reduced fidelity. Think leaves or grass that are very expensive to compute individually but that players won’t notice if ray traced at lower accuracy. It’s a neat idea, but it’s entirely dependent on developers knowing about it and then coding for it.

Arm Immortalis G925 Performance

In real-world games, Arm is claiming even more significant gains with 14 Immortalis G925 cores versus 12 older G720. Of course, that’s not a like-for-like comparison, so take it with a pinch of salt. But giving Arm the benefit of the doubt, I’d guess that you can fit 14 G925 cores in the space of 12 of the previous G720s, but that’s entirely my speculation.

Still, for just two more cores, Arm touts a 72% performance improvement in Call of Duty, 49% in Genshin Impact, 46% in Diablo Immortal, and a 29% gain in Fortnite. The key lies in the core’s new Fragment Prepass technique. The TLDR is that this vastly improves hidden object culling (think a player or object hidden behind a wall), reducing CPU load for these big performance gains. Games with complex geometry benefit most, hence the performance differences between CoD and Fortnite.

If you want a more in-depth explanation, Arm has replaced its traditional Z-buffer-based hidden surface removal (HSR) techniques, like forward pixel kill and primitive re-ordering, with its fragment prepass technology. The key difference is that it removes the need to re-order the Z-buffer (depth buffer) to make culling decisions, reducing driver CPU cycles by up to 43% per thread. This is all done in hardware, meaning there is no overhead for developers, but it doesn’t benefit all games equally.
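If the hidden surface removal jargon is unfamiliar, the toy sketch below shows the general idea: only the nearest fragment per pixel ever needs shading, so resolving depth up front lets the GPU skip occluded work entirely. This is a generic illustration of depth-based culling, not a reconstruction of Arm’s hardware Fragment Prepass, whose internals aren’t public.

```python
# Toy depth-buffer hidden surface removal: only the nearest fragment per
# pixel gets shaded. A prepass that resolves depth first lets the GPU skip
# shading occluded fragments entirely. Generic illustration only, not Arm's
# hardware Fragment Prepass implementation.

def cull_hidden_fragments(fragments):
    """fragments: list of (pixel_xy, depth, label) tuples; smaller depth = nearer."""
    nearest = {}
    for pixel, depth, label in fragments:
        if pixel not in nearest or depth < nearest[pixel][0]:
            nearest[pixel] = (depth, label)
    return nearest

frags = [((0, 0), 5.0, "wall"), ((0, 0), 2.0, "player"),   # player occludes wall
         ((1, 0), 3.0, "grass")]
visible = cull_hidden_fragments(frags)
print(visible)   # only the player and the grass get shaded; the wall is culled
```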

What about AI?

Arm Immortalis G925 Machine Learning

No 2024 announcement is complete without AI, and Arm had a fair bit to say here despite not having a dedicated AI accelerator to augment its more traditional CPU and GPU parts. Instead, Arm is banking on the more developer-friendly and universal appeal of the CPU and, to a lesser extent, the GPU to tout its AI capabilities.

For instance, Arm points out that most third-party AI Android apps run on the CPU rather than an accelerator, as few have invested the development resources to support the numerous SoC API platforms. In lieu of a more universal API, Arm is banking on the CPU to remain an essential component for AI. That said, this is much easier to say when you don’t have skin in the mobile AI accelerator market.

Still, Arm has some performance numbers to trot out here. The Arm Cortex-X925 boasts a 42% faster time to first token with an 8-billion-parameter LLaMA 3 model and 46% faster for a 3.8-billion-parameter Phi 3 model. AI CPU inference is also up 59% compared to the Cortex-X4, with GPU inference capabilities receiving a 36% boost over last year’s reference platform. Similarly, the new GPU (in a 14-core versus 12-core configuration) is up to 50% faster in natural language processing, 41% faster in image segmentation, and 32% faster for speech-to-text.

Those are all very welcome improvements to help make AI apps more responsive, but it’s worth remembering that neither a CPU nor a GPU is as fast and efficient as a dedicated AI accelerator.

What to expect from next-gen products

Samsung Galaxy S24 homescreen in hand

Credit: Robert Triggs / Android Authority

Arm’s next-gen cores are destined for 2025 flagship smartphones, with Samsung and MediaTek likely to be the biggest mobile silicon vendors to leverage these cutting-edge technologies. Qualcomm is moving to a new custom CPU core for the Snapdragon 8 Gen 4, which means that the majority of flagship Android phones in 2025 probably won’t use Arm Cortex-X925 or Immortalis-G925.

Likewise, the upcoming major wave of Windows on Arm laptops is powered entirely by Qualcomm’s Snapdragon X Elite platform. Again, this platform uses custom CPU cores rather than Arm’s Cortex. Arm didn’t have much to say about specific plans for Arm-based PCs, likely given Qualcomm’s exclusivity deal with Microsoft, which is rumored to end in 2024. Still, it’s entirely possible that we might see other silicon vendors use Arm Cortex-X cores, quite possibly the new X925, for rival chipsets at some point in 2025. For instance, Arm envisions a PC chip with up to 12 Cortex-X925 CPU cores to push performance well beyond mobile.

Although Arm announced its latest client technologies in the first half of the year, partner chipsets will be announced near the end of 2024 at the earliest. Smartphones powered by the Cortex-X925 and/or Immortalis-G925 are expected to land in consumer hands in early 2025.

Google Pixel 8a charges slower than the Galápagos tortoise runs the 400m

Many are (quite rightly) scratching their heads over the seemingly small differences between the Google Pixel 8 and the newer Pixel 8a. However, one of the few reasons to pick the regular Pixel 8 over the more affordable 8a is charging times.

We tested it, and it takes about 100 minutes to fully charge the Google Pixel 8a. While it was charging, I did some research and discovered that the leisurely Galápagos tortoise moves, on average, at about 0.26 kilometers an hour, or 4.3 meters per minute. In the time it takes the Pixel 8a to charge, a particularly determined tortoise could finish a 400m race with a few minutes to spare. Talk about slow.
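In case you want to check my working, here are the same numbers run through a quick script (using only the figures quoted above):

```python
# Sanity check on the tortoise maths using the figures above.
tortoise_kmh = 0.26        # average Galapagos tortoise speed
charge_minutes = 100       # measured Pixel 8a 0-100% charge time

metres_per_minute = tortoise_kmh * 1000 / 60     # ~4.3 m/min
distance = metres_per_minute * charge_minutes    # ~433 m
print(f"{metres_per_minute:.1f} m/min -> {distance:.0f} m while the 8a charges")
# Comfortably past the 400m mark, with a few minutes to spare.
```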

AI is already messing up the era of long-term smartphone updates

Opinion post by
Robert Triggs

When Samsung announced Galaxy AI alongside the Galaxy S24 series, one of the first questions on everyone’s lips was: will older phones receive these awesome features? After all, Samsung has one of the industry’s leading update policies, with modern handsets receiving five to seven years of major updates, raising expectations for prompt, long-term feature support.

After just a couple of months, Samsung rolled out its One UI 6.1 update, complete with Galaxy AI, to the Galaxy S23 series and the Tab S9 family, as well as foldables like the Galaxy Z Flip 5 and Z Fold 5. So far, so good — last year’s handsets run Galaxy AI just as well as the newer S24 series. However, it does beg a whole other question: why upgrade? Samsung stated that older handsets with long update commitments would receive a similar update in due time.

Samsung Galaxy A55 5G rumors: Expected release date and what we want to see

Samsung Galaxy A54 5G white back in hand
Galaxy A54 5G camera trio
Credit: Robert Triggs / Android Authority

Update, March 6, 2024 (7:35 AM ET): We’ve updated this Samsung Galaxy A55 5G rumor hub with more leaked renders and specs via a reliable leaker.


Original article: Samsung’s flagship smartphones might grab the headlines, but it’s hard to go wrong with the brand’s more affordable Galaxy A54 5G. Sporting a high-end design, robust everyday performance, a wonderful camera, and an upgrade pledge that can’t be beaten, it’s not just a tremendous affordable Android phone but a solid all-around pick.

Tested: The Exynos Galaxy S24 is better than we expected

The return of Samsung’s Exynos chipset in the Galaxy S24 series once again leaves us with a high-end flagship series sporting a key hardware difference depending on where you reside. To recap, all Galaxy S24 Ultra owners, along with S24 and S24 Plus customers in the US, Canada, China, Taiwan, and Hong Kong, receive the Snapdragon 8 Gen 3 for Galaxy chipset. Customers who buy the Galaxy S24 or S24 Plus in Europe, the UK, India, and other regions receive the Exynos 2400 for Galaxy.

We’ve already covered the low-level Snapdragon 8 Gen 3 and Exynos 2400 specifications; today, we’re diving into our test results to explore what differences, if any, there are to be found when using these phones. To make the test completely fair, we’ve grabbed both Exynos and Snapdragon versions of the regular Galaxy S24, ensuring consistent battery capacity, display specs, and other hardware for our side-by-side comparison.

Snapdragon has the best performance

Look across our suite of benchmarks, and the Snapdragon 8 Gen 3 model comes out ahead of the Exynos Galaxy S24 in every test. The margins aren’t huge; there’s just 7% between the two in Geekbench 6 and 5% in PCMark. Given the phones’ similar CPU and memory capabilities, you’re unlikely to notice any performance difference in daily or heavy-duty workloads. Perhaps the most interesting result is that the Exynos’ ten-core setup is bested by Snapdragon’s bulkier eight-core CPU design.

However, the suite of 3DMark graphics tests shows bigger wins for the Snapdragon 8 Gen 3, which outscores the Exynos 2400 by 25% in Wild Life and 18% in the more demanding Wild Life Extreme. The exception is Solar Bay, which uses ray tracing. The two models score virtually identically here, suggesting that ray tracing is a relative bottleneck for the Adreno architecture and that Samsung’s use of AMD’s RDNA 3-based Xclipse architecture pays off.

Turning to stress tests, it’s clear that the limited cooling capabilities of the compact Galaxy S24 form factor don’t allow either of these GPUs to run at full tilt for long. Both throttle back at roughly the same point, but the Snapdragon model falls furthest from its peak potential, dropping to just 47.9%, 54.2%, and 43.6% of peak performance in the Wild Life, Wild Life Extreme, and Solar Bay stress tests, respectively. Exynos falls to 54.9%, 56.5%, and 60.3% after the same 20 runs. That said, the 8 Gen 3 still outperforms the Exynos chip in standard rasterization. The exception is ray tracing, where the AMD architecture pulls ahead as heat builds up.
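For context, 3DMark’s stress tests loop the same benchmark 20 times and report the worst loop’s score as a percentage of the best, which is where the “percent of peak” figures above come from. A minimal sketch of that calculation, using made-up loop scores rather than our measured data:

# 3DMark-style stability: worst loop score divided by best loop score.
# The scores below are invented for illustration, not our actual results.

def stability(loop_scores):
    return min(loop_scores) / max(loop_scores)

example_scores = [11000, 10400, 9300, 8100, 7000, 6200, 5900, 5700,
                  5650, 5600, 5580, 5560, 5550, 5540, 5530, 5525,
                  5520, 5515, 5510, 5500]  # 20 hypothetical runs
print(round(stability(example_scores) * 100, 1))  # 50.0 (% of peak retained)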

Snapdragon is the stronger gaming chip, but Exynos wins at ray tracing under stress.

Interestingly, the Exynos chip has marginally higher temperatures across all three of these tests. It averages about 2°C warmer than the Snapdragon phone under stress. Thankfully, neither phone breaches the dreaded 50°C mark.

Exynos offers better battery life

While Snapdragon has the edge in performance, our testing suggests Exynos Galaxy S24 customers will see better battery life than their Snapdragon counterparts. This was also the case with previous-generation chipset comparisons, including the Snapdragon 865 vs Exynos 990 and Snapdragon 888 versus Exynos 2100.

It’s not a clear-cut win for Exynos in all our battery life tests. Both models last about the same length of time in our camera capture test, and the Snapdragon version lasted 16% longer when recording 4K30 video.

However, the Exynos Galaxy S24 pulls ahead with 17% longer 4K video playback and gaming benchmark times, and a 14% lead in our Zoom call test. The biggest win, however, is 36% longer battery life in our automated web browsing test. When it comes to the tasks you’re likely to do most often throughout the day, the Exynos Galaxy S24 seems to last a fair bit longer.

Exynos steams ahead with at least a 15% lead in most of our battery life tests.

Why is this the case? There are a few possibilities. First, the Exynos 2400 has two additional small CPU cores and lower clock speeds across all ten of its cores. Lighter workloads, such as web browsing, can be scheduled onto those lower-clocked, lower-power cores, which should consume less energy overall. Similarly, it could be that Samsung Foundry’s third-generation 4nm (LPP+) manufacturing process is as efficient as, or even slightly more efficient than, the TSMC 4nm process used by the Snapdragon 8 Gen 3.

This could also factor into the longer video playback times we see with the Exynos model. It’s also possible that Samsung’s video decoder is more efficient, but we’re well into speculation here.
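To illustrate the lower-clock argument, dynamic CPU power scales roughly with voltage squared times frequency, so a small core running slower at a lower voltage draws a fraction of a big core’s power. Here’s a toy model; the voltages and clocks are illustrative, not Exynos or Snapdragon figures:

# Toy dynamic-power model: P is roughly proportional to V^2 * f.
# Numbers are illustrative only; they are not measured chip values.

def relative_dynamic_power(voltage, frequency_ghz):
    return voltage ** 2 * frequency_ghz

big = relative_dynamic_power(voltage=0.95, frequency_ghz=3.2)     # big core, flat out
little = relative_dynamic_power(voltage=0.65, frequency_ghz=2.0)  # small core, light load

print(round(little / big * 100))  # ~29% of the big core's dynamic power
# Even if the small core takes longer to finish the same light task,
# the total energy (power x time) for browsing-style work can still be lower.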

Of course, this test suite isn’t an exhaustive list of possible use cases, so your mileage will vary. Modem capabilities and the associated drain when roaming between cell towers will affect these results but are beyond the scope of what we can test. Likewise, background tasks and other settings will drain the battery too, so this test is more of a rough guide than an absolute battery life expectation. Still, the Exynos 2400 seems more frugal across most of our tests, pointing to longer battery life for global customers versus the Snapdragon model.

Snapdragon vs Exynos Galaxy S24, which is better?

Samsung Galaxy S24 Plus vs Samsung Galaxy S24 in hand

Galaxy S24 and S24 Plus
Credit: Robert Triggs / Android Authority

Looking at this year’s Galaxy S24 and S24 Plus models, the regional trade-off is marginally higher gaming performance for Snapdragon 8 Gen 3 customers, especially in non-ray-traced titles, while Exynos users may end up with longer battery life in many common workloads. Everyone will have their own preference and priority here (I’d take the extra battery life).

Ultimately, there’s not a lot to be done about this anyway. Whether your region ships the Exynos or Snapdragon version of the Galaxy S24 and S24 Plus is a lottery, and we’d hardly recommend the hassle of importing a different model. After all, the core Samsung Galaxy S24 experience is the same, regardless of which processor your phone features.
