How to run Meta’s Llama 3 on your PC

By: Gary Sims

Meta, the company formerly known as Facebook, has recently unveiled Llama 3, the latest iteration of its large language model. This advanced model is available in two versions: an eight billion (8B) parameter version and a 70 billion (70B) parameter version. In this article, we will explore how to run the 8B parameter version of Llama 3 locally, a more feasible option for standard desktops or laptops that may struggle to run the larger 70B version.

Llama 3’s performance overview

Llama 3 is an impressive large language model. The 8B parameter version, trained using 1.3 million hours of GPU time, outperforms its predecessor, Llama 2, in several ways: it scores 34% better than the 7 billion parameter version of Llama 2 and 14% better than the 13 billion parameter version. It falls short of the 70B parameter version of Llama 2 by only 8%, which makes it an impressive model for its size.

Adding NVMe storage to your Raspberry Pi 5: What you should know

By: Gary Sims

The Raspberry Pi 5 arrived with the promise of compatibility with NVMe solid state storage, specifically M.2 SSDs. However, this feature was not immediately available at launch. The Pi 5’s exposed PCIe connector makes this possible, but official Raspberry Pi solutions are still unavailable. Nevertheless, third-party solutions have emerged. In this article, I explore the NVMe Base by Pimoroni, which allows you to connect an M.2 NVMe drive to your Pi 5 and boot from it, effectively saying goodbye to SD cards.

SD cards or NVMe drives?

Snapdragon vs Exynos vs Apple: Which chips have the best sustained performance?

By: Gary Sims

When a new device hits the market, the initial focus often gravitates towards peak performance. Take, for instance, the recent launch of the Samsung Galaxy S24 range. Numerous videos, including mine, have discussed its peak performance, examining Geekbench and 3DMark scores. While these metrics are undoubtedly intriguing, considering sustained performance is equally important. How does the device fare after an extended period of use? In this article, I’ll be delving into the sustained performance of various flagship processors, including those from Samsung, Qualcomm, Apple, and more.

Why is sustained performance important?

There are several ways to evaluate the performance of a system-on-chip (SoC). Peak CPU and GPU performance provide insight into what a mobile SoC can achieve under optimal conditions. However, these conditions are fleeting. Therefore, examining sustained performance is beneficial, revealing what happens when the test is run multiple times. How do the thermals behave? Does the processor slow down? Is the peak performance somewhat deceptive, as it can only be maintained briefly? This approach provides a more accurate reflection of performance during prolonged use, especially during gaming sessions.
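The "run the test multiple times" idea can be sketched in a few lines, assuming a hypothetical fixed CPU workload standing in for a real benchmark such as 3DMark's stress test:

```python
# A rough sketch of a sustained-performance test: run the same
# workload repeatedly and compare early runs against later ones.
# On a device that throttles, the later runs take noticeably longer.
import time

def workload():
    # Fixed amount of CPU work (a stand-in for a real benchmark pass)
    total = 0
    for i in range(200_000):
        total += i * i
    return total

timings = []
for _ in range(10):
    start = time.perf_counter()
    workload()
    timings.append(time.perf_counter() - start)

# A slowdown factor well above 1.0 over many passes suggests throttling
slowdown = timings[-1] / timings[0]
print(f"first run: {timings[0]:.4f}s, last run: {timings[-1]:.4f}s")
print(f"slowdown factor: {slowdown:.2f}")
```

On a desktop this will likely report a factor near 1.0; thermally constrained phones are where the later passes stretch out.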

Machine learning vs AI vs deep learning: The differences explained

By: Gary Sims

In today’s digital age, terms like machine learning, deep learning, and AI are often used interchangeably, leading to a common misconception that they all mean the same thing. However, these terms have distinct technical differences that are important to understand. This article aims to explore these terms in detail, but feel free to check out the video above as well.

What is machine learning and deep learning?

Machine learning is a subfield of computer science that emphasizes the development of algorithms and statistical models. These models enable computers to perform tasks without explicit instructions, relying instead on patterns and inference. Unlike traditional computer programs where you specify the steps, machine learning presents examples from which the system learns, deciphering the relationship between different elements in the example.

Machine learning is a subfield of computer science that emphasizes the development of algorithms and statistical models.

Machine learning involves two distinct phases: training and inference. A computer algorithm analyzes many samples or training data to extract relevant features and patterns during the training stage. This data can include numbers, text, images, speech, and videos. The models analyze the data, identify different features in the dataset, and learn to distinguish one thing from another.

There are different methods of conducting the training stage. The first one, supervised learning, involves learning that explicitly maps the input to the output. Other types of training include unsupervised learning, where the patterns are not labeled, and reinforcement learning.

Inference, the second stage, is the output stage. Here, the model, drawing from everything it learned, is queried about something not included in the training data.
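The two phases can be sketched with a toy nearest-neighbor model (a deliberately simple stand-in for the models discussed here): "training" just stores labeled examples, and "inference" answers a query the model never saw:

```python
# Toy supervised model: training data maps 2D feature points to labels.
import math

training_data = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((5.0, 5.2), "dog"),
    ((4.8, 5.1), "dog"),
]

def infer(point):
    # Inference: classify an unseen point by its closest training example
    nearest = min(training_data, key=lambda ex: math.dist(point, ex[0]))
    return nearest[1]

print(infer((1.1, 0.9)))  # -> cat (this point is not in the training data)
print(infer((5.1, 4.9)))  # -> dog
```

The labeled pairs make this supervised learning; drop the labels and cluster the points instead, and you have a simple unsupervised setup.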

[Image: ChatGPT Android app in the Play Store. Credit: Calvin Wankhede / Android Authority]

Numerous models can be used, and not all are neural networks. However, neural networks, which mimic how the neurons in the brain work, are pretty popular today. These digital neurons are arranged in layers, each having weights and biases. The network adjusts these weights and biases during the learning phase to produce the correct answer.
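The weight-and-bias adjustment described above can be sketched with a single digital neuron, here trained with the classic perceptron update rule (chosen for brevity; real networks use many layered neurons and gradient-based optimizers):

```python
# A single artificial neuron: a weighted sum of inputs plus a bias,
# passed through a step activation. Training nudges the weights and
# bias whenever the neuron's answer is wrong (the perceptron rule).

def predict(weights, bias, inputs):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def train(samples, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            # Move weights and bias in the direction that reduces the error
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Learn the logical OR function from labeled examples
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
weights, bias = train(data)
print([predict(weights, bias, x) for x, _ in data])  # -> [0, 1, 1, 1]
```

After training, the adjusted weights and bias reproduce the correct answer for every example, which is exactly the learning loop described above in miniature.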

Deep learning relates to neural networks, with the term 'deep' referring to the number of layers inside the network.

There are various types of neural networks beyond classic examples, including convolutional neural networks, recurrent neural networks (RNNs) like long short-term memory networks (LSTMs), and more recently, transformer networks. Deep learning relates to neural networks, with the term “deep” referring to the number of layers inside the network.

How does AI differ from machine learning?

[Image: Samsung Galaxy S24 Ultra generative AI wallpapers. Credit: C. Scott Brown / Android Authority]

Many machine learning systems we use daily, such as face detection, speech recognition, and object detection, are types of machine learning, not AI. However, due to marketing strategies, they are often labeled as AI. AI, which originally referred to human-like intelligence in machines, now refers to any technology that partially shares attributes with human intelligence. In this sense, today’s AI is very narrow and is essentially machine learning.

The Turing Test, an imitation game in which a judge exchanges text messages with two hidden participants and tries to tell the human from the machine, has been made obsolete by large language models (LLMs): they can imitate without thinking, which invalidates the game as an answer to the original question, “Can machines think?” This leads us to Artificial General Intelligence (AGI), a term used to describe a type of artificial intelligence that is as versatile and capable as a human. AGI is currently a theoretical idea with no existing systems. To be considered AGI, a system must learn and apply its intelligence to various problems, even those it hasn’t encountered before. A true human-level AGI would also need to possess consciousness and self-awareness.

What is sudo for Windows, and how do you use it?

By: Gary Sims

If you’ve ever interacted with the Linux command line, you’re likely familiar with the sudo command, short for “super user do.” This command allows you to execute actions with elevated privileges, essentially running it as root. This is necessary for system administration tasks inaccessible to regular users without the required privileges. As an administrator, you can configure sudo to be accessible only to you, preventing others from using it.

Excitingly, Microsoft has recently announced the introduction of sudo for Windows. This isn’t a third-party addition; it’s built directly into the Windows operating system. At the time of writing, you’ll need an insider build starting with build 26052 to access this feature. Over time, this will gradually roll out to the beta channel and eventually to mainstream users. The version of Windows you’ll need to enable this feature will depend on when you’re reading this article.

But what does sudo for Windows allow? In this post, I’ll guide you through enabling sudo and using it to run commands with elevated privileges. Also feel free to check out the video on the topic above.

How to enable sudo for Windows

[Image: sudo for Windows. Credit: Microsoft]

Sudo for Windows isn’t enabled by default. Here’s how to turn it on:

  1. Navigate to the Start Menu.
  2. Select Settings.
  3. Select System.
  4. Scroll down to For developers.
  5. Find the option to Enable sudo and toggle it on.

Be aware that running the sudo command could potentially expose your device and personal data to security risks.

Once sudo is enabled, open a PowerShell window using the Windows Terminal. Now, if you enter sudo dir, it will execute the dir command with elevated privileges.

How to adjust sudo for Windows configuration

[Image: sudo for Windows settings. Credit: Microsoft]

By default, when you run sudo, it launches in a separate window. To change this, return to the Settings page where you enabled sudo and adjust the configuration under Configure how sudo runs applications. There are three options: in a new window (the default), with input disabled (no input reaches the elevated process), and inline, which I prefer. Once you’ve set this to inline, you can return to the terminal and try again. Now, when you run sudo dir, it will execute within the current window, similar to the experience on Linux.

Interestingly, you can also adjust this configuration from the command line. To force a new window, enter sudo config --enable forceNewWindow; note that “New Window” is capitalized. Changing the configuration requires administrator rights, so you prefix the command with sudo itself, making it sudo sudo config --enable forceNewWindow. To revert to the inline setting, type sudo sudo config --enable normal.

What can sudo for Windows do?

Let’s explore a couple of examples. Using sudo for Windows, you can now run applications with elevated privileges. For instance, if you want to run Notepad with these privileges, you can do so. This can be particularly useful when you need to edit a configuration file that requires administrator rights.

Another practical example is creating a directory in a location that requires elevated privileges. For instance, if you enter cd \ followed by cd "Program Files" and attempt to create a directory named zob, you’ll be denied access. However, if you enter sudo mkdir zob, the directory will be created. The same applies to removing it with sudo rmdir zob. Any task requiring administrative rights or elevated privileges can now be executed from the command line using sudo.

What is x86-64-v3? Understanding the x86-64 microarchitecture levels

By: Gary Sims

The term x86-64v3 is once again a discussion point for Linux users, sparking curiosity and questions about its relevance to the platform. But what is it, why does it matter to Linux, and what is all the fuss about? Find out everything you need to know about x86-64v3 below.

A brief overview of microarchitecture history

The story of the x86 instruction set began about 39 years ago with the introduction of the Intel 80386, commonly referred to as the 386. This was a pivotal moment in the history of modern desktop and server computing. Launched in 1985, the 386 was Intel’s first 32-bit processor and was equipped with a full memory management unit, enabling it to run operating systems that utilize virtual memory. However, the evolution of x86 technology didn’t stop at the 386. Over time, this older chip, its microarchitecture, and its instructions were phased out. Debian Linux discontinued 386 support in 2005 and completely removed it in 2007. The Linux kernel followed suit in 2012, despite Linux’s original development on 386 and 486 machines.

The 586, the 686, and so forth followed their predecessors, which were later named the Pentium, Pentium II, and so on. Each new version introduced additional instructions to the x86 instruction set and new extensions like MMX and SSE. Eventually, common operating systems began phasing out support for 32-bit x86 entirely, ushering in the 64-bit era. For instance, Windows 11 is exclusively 64-bit, Ubuntu ceased supporting 32-bit PCs in 2018, and macOS transitioned to fully 64-bit in 2011.

x86-64v3 essentially adds AVX2, MOVBE, FMA, and some additional bit manipulation instructions.

However, the advent of the 64-bit era didn’t signify the end of progress for x86. The first milestone was the baseline x86-64 (AMD64) instruction set, which includes MMX, SSE, and SSE2. It was first implemented by the 2003 AMD K8 processor family and matched by Intel’s first 64-bit EM64T processors. In short, AMD’s processors were the first 64-bit x86 chips, and they are the ones that defined the x86-64 architecture.

The next evolution of this baseline instruction set is referred to as x86-64v2, which includes SSE3, SSE4.1, and SSE4.2. This corresponds to the mainline CPUs from around 2008 to 2011, such as the AMD Bulldozer and Intel Nehalem. As of May 2023, Red Hat Enterprise Linux 9 discontinued support for older baseline x86-64-bit processors, requiring support from x86-64v2 processors instead. openSUSE Tumbleweed also began transitioning to require v2 processors in late 2023. SUSE Linux Enterprise Server recommends v2 for optimal performance and may discontinue support for older processors in the future.

What is x86-64v3?

[Image: Intel Evo logo on a HUAWEI MateBook X Pro 2022. Credit: Oliver Cragg / Android Authority]

x86-64v3 essentially adds vector instructions up to AVX2, MOVBE, FMA, and some additional bit manipulation instructions. AVX2, also known as the Haswell instructions, expands on the original AVX instruction set and was introduced in Intel’s Haswell microarchitecture. The MOVBE instruction allows for quick conversion between little-endian and big-endian in hardware by swapping bytes on a read from memory or on a write to memory. The FMA (fused multiply-add) instruction combines multiplication and addition into a single operation that computes the intermediate result at full precision and rounds only once. This is particularly useful for gaming, matrix operations, and neural network applications.
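The byte swap that MOVBE performs during a load or store can be illustrated in Python (a sketch of the effect on a 32-bit value, not of the instruction itself):

```python
# The same 32-bit value laid out in memory as little-endian vs
# big-endian. Reversing the bytes of one layout yields the other,
# which is the conversion MOVBE does in hardware during a memory access.
value = 0x12345678

little = value.to_bytes(4, "little")   # b'\x78\x56\x34\x12'
big = value.to_bytes(4, "big")         # b'\x12\x34\x56\x78'

# Swapping the bytes of the little-endian layout gives the big-endian one
swapped = bytes(reversed(little))
print(swapped == big)  # -> True

# Both layouts decode back to the same number when read with the right order
print(int.from_bytes(little, "little") == int.from_bytes(big, "big"))  # -> True
```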

x86-64v3 was first implemented in Intel’s Haswell generation CPUs in 2013, and AMD implemented it in 2015 with the Excavator microarchitecture. However, Intel’s Atom product line only added v3 support with the Gracemont microarchitecture in 2021. Despite this, Intel continued to release Atom CPUs without AVX or AVX2, including the Parker Ridge line in 2022 and some Elkhart Lake variants in 2023.

This is why v3 has been slow to become the new baseline: not all Intel processors support it, which makes mandating it problematic. These are Atom processors rather than server parts, but it means v3 support is not guaranteed and needs to be checked for each specific processor. It’s also worth mentioning that there is an x86-64v4 level, introduced with Intel’s Skylake server chips and AMD’s Zen 4 platforms, which adds AVX-512.

To test which x86-64 microarchitecture level your CPU supports, use the x86-64-level tool available on GitHub, or the ld-linux command on Ubuntu and other distros. Here’s an example for Ubuntu:

/usr/lib64/ld-linux-x86-64.so.2 --help

This will provide a listing and indicate whether it supports v2, v3, and v4.
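Under the hood, each level is just a set of required CPU features (defined in the x86-64 psABI), so the check can be sketched as a set comparison. The flag names below are simplified; /proc/cpuinfo spells some differently (SSE3 appears as pni, for example):

```python
# Required feature flags for each x86-64 microarchitecture level.
# Each level also implies all the levels below it.
LEVELS = {
    "x86-64-v2": {"cx16", "lahf_lm", "popcnt", "sse3", "sse4_1", "sse4_2", "ssse3"},
    "x86-64-v3": {"avx", "avx2", "bmi1", "bmi2", "f16c", "fma", "lzcnt", "movbe"},
    "x86-64-v4": {"avx512f", "avx512bw", "avx512cd", "avx512dq", "avx512vl"},
}

def highest_level(flags):
    """Return the highest level whose required flags are all present."""
    level = "x86-64 (baseline)"
    for name in ("x86-64-v2", "x86-64-v3", "x86-64-v4"):
        if LEVELS[name] <= flags:   # all required flags present?
            level = name
        else:
            break                   # levels are cumulative, so stop here
    return level

# A Haswell-era CPU: has the v2 and v3 feature flags but no AVX-512
haswell = LEVELS["x86-64-v2"] | LEVELS["x86-64-v3"]
print(highest_level(haswell))  # -> x86-64-v3
```

On a real Linux system you would feed this the flag set parsed from /proc/cpuinfo (with the spelling differences normalized) rather than a hand-built one.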

Why is everyone talking about x86-64v3?

[Image credit: Edgar Cervantes / Android Authority]

The buzz is primarily due to Red Hat Enterprise Linux 10 moving to the v3 baseline. Gentoo is now offering v3 packages, and there are experimental builds of Ubuntu Server using v3. This essentially means the distributions are compiled with compiler flags that let AVX2 be used when beneficial, which in turn requires hardware support. Different Linux distributions will roll this out at different stages. For instance, NixOS is transitioning to v2 in 2024 and subsequently to v3 by 2027.

Comparisons of Linux distros compiled with baseline or v2 to those with v3 show varying performance results. In some cases, there is a performance boost, while in others, there is a performance decline. This is a maturing technology regarding the compiler flags and the code the compiler produces for different use cases.

Just because a distro requires a certain microarchitecture level, doesn't mean that Linux itself demands it.

However, it’s important to note that just because a distro requires a certain microarchitecture level, it doesn’t mean that Linux itself demands it. For example, 32-bit Linux distributions are still available today. Therefore, you can always find a version of a Linux distribution that suits your specific hardware, especially if you have an older PC. The focus here is on what the leading edge and most popular distros are doing. Of course, if the Linux kernel dropped support for a certain microarchitecture level as it did with the 386, that would be a different matter. However, we’re not at that stage yet.

Lastly, this pertains only to 64-bit x86 desktop processors from Intel or AMD. It does not apply to, for example, Arm processors that you might find in a Raspberry Pi or in your smartphone. Moreover, Windows 11 has already mandated the use of modern CPUs, requiring an eighth-generation Intel Core or Zen 2 to run Windows 11 officially.
