

Radar trends to watch: May 2022



April was the month for large language models. There was one announcement after another; most new models were larger than their predecessors, and several claimed to be significantly more energy efficient. The largest (as far as we know) is Google’s GLaM, with 1.2 trillion parameters, though it requires significantly less energy to train than GPT-3. Chinchilla has roughly ¼ as many parameters as Gopher, but claims to outperform both Gopher and GPT-3. It’s not clear where the race to bigger and bigger models will end, or where it will lead us. The PaLM model claims to be able to reason about cause and effect (in addition to being more efficient than other large models); we don’t yet have thinking machines (and we may never), but we’re getting closer. It’s also good to see that energy efficiency has become part of the conversation.

AI

  • Google has created GLaM, a 1.2-trillion-parameter model (7 times the size of GPT-3). Training GLaM required 456 megawatt-hours, ⅓ the energy needed to train GPT-3. GLaM uses a Mixture-of-Experts (MoE) architecture, in which different subsets of the neural network are activated depending on the input.
  • Google has released a dataset of 3D-scanned household items.  This will be invaluable for anyone working on AI for virtual reality.
  • FOMO (Faster Objects, More Objects) is a machine learning model for object detection in real time that requires less than 200KB of memory. It’s part of the TinyML movement: machine learning for small embedded systems.
  • LAION (Large Scale Artificial Intelligence Open Network) is a non-profit, free, and open organization that is creating large models and making them available to the public. It’s what OpenAI was supposed to be. The first model is a set of image-text pairs for training models similar to DALL-E.
  • Nvidia is using AI to automate the design of its latest GPU chips.
  • Using AI to inspect sewer pipes is one example of an “unseen” AI application. It’s infrastructural, it doesn’t risk incorporating biases or significant ethical problems, and (if it works) it improves the quality of human life.
  • Large language models are generally based on text. Facebook is working on building a language model from spoken language, which is a much more difficult problem.
  • STEGO is a new algorithm for automatically labeling image data. It uses transformers to understand relationships between objects, allowing it to segment and label objects without human input.
  • A researcher has developed a model for predicting first impressions and stereotypes, based on a photograph.  They’re careful to say that this model could easily be used to fine-tune fakes for maximum impact, and that “first impressions” don’t actually say anything about a person.
  • A group building language models for the Māori people shows that AI for indigenous languages requires different ways of thinking about artificial intelligence, data, and data rights.
  • AI21 is a new company offering a large language model “as a service.” They allow customers to train custom versions of their model, and they claim to make humans and machines “thought partners.”
  • Researchers have found a method for reducing toxic text generated by language models. It sounds like a GAN (generative adversarial network), in which a model trained to produce toxic text “plays against” a model being trained to detect and reject toxicity.
  • More bad applications of AI: companies are using AI to monitor your mood during sales calls.  This questionable feature will soon be coming to Zoom.
  • Primer has developed a tool that uses AI to transcribe, translate, and analyze intercepted communications in the war between Russia and Ukraine.
  • DeepMind claims that another new large language model, Chinchilla, outperforms GPT-3 and Gopher with roughly ¼ the number of parameters. It was trained on roughly 4 times as much data but, with fewer parameters, it requires less energy to train and fine-tune.
  • Data Reliability Engineering (DRE) borrows ideas from SRE and DevOps as a framework to provide higher-quality data for machine learning applications while reducing the manual labor required. It’s closely related to data-centric AI.
  • OpenAI’s DALL-E 2 is a new take on their system (DALL-E) for generating images from natural language descriptions. It is also capable of modifying existing artworks based on natural language descriptions of the modifications. OpenAI plans to open DALL-E 2 to the public, on terms similar to GPT-3.
  • Google’s new Pathways Language Model (PaLM) is more efficient, can understand concepts, and reason about cause and effect, in addition to being relatively energy-efficient. It’s another step forward towards AI that actually appears to think.
  • SandboxAQ is a startup spun out of Alphabet that is using AI to build technologies needed for a post-quantum world. They’re not doing quantum computing as such, but solving problems such as protocols for post-quantum cryptography.
  • IBM has open sourced the Generative Toolkit for Scientific Discovery (GT4SD), which is a generative model designed to produce new ideas for scientific research, both in machine learning and in areas like biology and materials science.
  • Waymo (Alphabet’s self-driving car company) now offers driverless service in San Francisco.  San Francisco is a more challenging environment than Phoenix, where Waymo has offered driverless service since 2020. Participation is limited to members of their Trusted Tester program.
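The Mixture-of-Experts idea behind GLaM is easy to sketch: a small gating network scores a set of expert subnetworks and routes each input to only the best few, so most of the model’s parameters sit idle on any given input. The toy NumPy version below illustrates the routing idea only; the expert count, layer sizes, and top-k value are made up for the example and have nothing to do with GLaM’s actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy Mixture-of-Experts layer: a gating network scores each expert,
# and only the top-k experts are evaluated for a given input.
n_experts, d_in, d_out, k = 4, 8, 8, 2
gate_w = rng.normal(size=(d_in, n_experts))
experts = [rng.normal(size=(d_in, d_out)) for _ in range(n_experts)]

def moe_forward(x):
    scores = softmax(x @ gate_w)      # routing probabilities, one per expert
    top_k = np.argsort(scores)[-k:]   # indices of the k best-scoring experts
    # Renormalize the selected gates so their weights sum to 1
    weights = scores[top_k] / scores[top_k].sum()
    # Only the selected experts do any work -- the rest are skipped,
    # which is how MoE models can be huge yet relatively cheap per input
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top_k))

y = moe_forward(rng.normal(size=d_in))
print(y.shape)  # (8,)
```

Real MoE models apply this routing per token inside transformer layers and add load-balancing terms so every expert gets trained; none of that machinery is shown here.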

Web3

  • Mastodon, a decentralized social network, appears to be benefitting from Elon Musk’s takeover of Twitter.
  • Reputation and identity management for web3 is a significant problem: how do you verify identity and reputation without giving applications more information than they should have?  A startup called Ontology claims to have solved it.
  • A virtual art museum for NFTs is still under construction, but it exists, and you can visit it. It’s probably a better experience in VR.
  • 2022 promises to be an even bigger year for cryptocrime than 2021. Attacks are increasingly focused on decentralized finance (DeFi) platforms.
  • Could a web3 version of Wikipedia evade Russia’s demands that they remove “prohibited information”?  Or will it lead to a Wikipedia that’s distorted by economic incentives (like past attempts to build a blockchain-based encyclopedia)?
  • The Helium Network is a decentralized public wide area network using LoRaWAN that pays access point operators in cryptocurrency. The network has over 700,000 hotspots, and coverage in most of the world’s major metropolitan areas.

Programming

  • Do we really need another shell scripting language?  The developers of hush think we do.  Hush is based on Lua, and claims to make shell scripting more robust and maintainable.
  • WebAssembly is making inroads; here’s a list of startups using wasm for everything from client-side media editing to building serverless platforms, smart data pipelines, and other server-side infrastructure.
  • QR codes are awful. Are they less awful when they’re animated? It doesn’t sound like it should work, but playing games with the error correction built into the standard allows the construction of animated QR codes.
  • Build your own quantum computer (in simulation)?  The Qubit Game lets players “build” a quantum computer, starting with a single qubit.
  • One of Docker’s founders is developing a new product, Dagger, that will help developers manage DevOps pipelines.
  • Can applications use “ambient notifications” (like a breeze, a gentle tap, or a shift in shadows) rather than intrusive beeps and gongs? Google has published Little Signals, six experiments with ambient notifications that include code, electronics, and 3D models for hardware.
  • Lambda Function URLs automate the configuration of an API endpoint for single-function microservices on AWS. They make the process of mapping a URL to a serverless function simple.
  • GitHub has added a dependency review feature that inspects the consequences of a pull request and warns of vulnerabilities that were introduced by new dependencies.
  • Google has proposed Supply Chain Levels for Software Artifacts (SLSA) as a framework for  ensuring the integrity of the software supply chain.  It is a set of security guidelines that can be used to generate metadata; the metadata can be audited and tracked to ensure that software components have not been tampered with and have traceable provenance.
  • Harvard and the Linux Foundation have produced Census II, which lists thousands of the most popular open source libraries and attempts to rank their usage.
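For teams that want GitHub’s dependency review enforced on every pull request, GitHub also publishes it as an action. A minimal workflow might look like the following; the version tags are assumptions, so check the current releases before using it.

```yaml
name: Dependency Review
on: [pull_request]

permissions:
  contents: read

jobs:
  dependency-review:
    runs-on: ubuntu-latest
    steps:
      # Check out the PR so the action can diff its dependency manifests
      - uses: actions/checkout@v3
      # Fails the check if the PR introduces known-vulnerable dependencies
      - uses: actions/dependency-review-action@v2
```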

Security

  • The REvil ransomware has returned (maybe). Although there’s a lot of speculation, it isn’t yet clear what this means or who is behind it. Nevertheless, they appear to be looking for business partners.
  • Attackers used stolen OAuth tokens to compromise GitHub and download data from a number of organizations, most notably npm.
  • The NSA, Department of Energy, and other federal agencies have discovered a new malware toolkit named “pipedream” that is designed to disable power infrastructure. It’s adaptable to other critical infrastructure systems. It doesn’t appear to have been used yet.
  • A Russian state-sponsored group known as Sandworm failed in an attempt to bring down Ukraine’s power grid. They used new versions of Industroyer (for attacking industrial control systems) and CaddyWiper (for cleaning up after the attack).
  • Re-use of IP addresses by a cloud provider can lead to “cloud squatting,” where an organization that is assigned a previously used IP address receives data intended for the previous addressee. Address assignment has become highly dynamic; DNS wasn’t designed for that.
  • Pete Warden wants to build a coalition of researchers that will discuss ways of verifying the privacy of devices that have cameras and microphones (not limited to phones).
  • Cyber warfare on the home front: The FBI remotely accessed devices at some US companies to remove Russian botnet malware. The malware targets WatchGuard firewalls and Asus routers. The Cyclops Blink botnet was developed by the Russia-sponsored Sandworm group.
  • Ransomware attacks have been seen that target Jupyter Notebooks on notebook servers where authentication has been disabled. There doesn’t appear to be a significant vulnerability in Jupyter itself; just don’t disable authentication!
  • By using a version of differential privacy on video feeds, surveillance cameras can provide a limited kind of privacy. Users can ask questions about the image, but can’t identify individuals. (Whether anyone wants a surveillance camera with privacy features is another question.)
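The core trick behind that kind of privacy-preserving analytics can be demonstrated with the Laplace mechanism, the textbook building block of differential privacy: answer aggregate queries with calibrated noise, so no single individual’s presence measurably changes the result. The sketch below is illustrative only; the counts, the epsilon value, and the function names are invented, and a real video-surveillance system involves far more machinery than this.

```python
import numpy as np

rng = np.random.default_rng(42)

def laplace_count(true_count, epsilon):
    """Return a differentially private version of an aggregate count.

    Adding Laplace noise with scale 1/epsilon means any one person's
    presence or absence shifts the answer's distribution only slightly,
    so a query like "how many people crossed the lobby?" can be answered
    without revealing whether a particular individual was there.
    """
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# The camera answers the same aggregate query with fresh noise each time
answers = [laplace_count(127, epsilon=0.5) for _ in range(5)]
print([round(a, 1) for a in answers])
```

Smaller epsilon values give stronger privacy but noisier answers; choosing that trade-off (and accounting for repeated queries) is where the real design work lies.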

Biology and Neuroscience

  • A brain-computer interface has allowed an ALS patient who was completely “locked in” to communicate with the outside world.  Communication is slow, but it goes well beyond simple yes/no requests.

Hardware

  • CAT scans aren’t just for radiology. Lumafield has produced a table-sized CT-scan machine that can be used in small shops and offices, with the image analysis done in their cloud.
  • Boston Dynamics has a second robot on the market: Stretch, a box-handling robot designed to perform tasks like unloading trucks and shipping containers.
  • A startup claims it has the ability to put thousands of single-molecule biosensors on a silicon chip that can be mass-produced. They intend to have a commercial product by the end of 2022.



This Article was first live here.



Apple’s Next Trick: Letting You Borrow Cables From Android Friends




Photo: Sam Rutherford / Gizmodo

It might not seem like the kind of needle-moving announcement Apple usually makes. But thanks to a news bit from a trusted analyst, there’s hope on the horizon that someday soon, Apple iPhones and Android smartphones will stop being separated—by charging cables, at least. (Don’t expect any parity on messaging any time soon!)

This week on Gadgettes, we dive into the most recent Apple leaks. With WWDC 2022 fast approaching, we figured it’d be an appropriate time to round up some of what we’ve heard in the rumor mill.

In addition to the USB-C tidbit, there’s chatter about everything from what the Apple Watch Series 8 will be capable of to whether iOS 16 will see much of a significant bump. We’ll also get into some of the patents revealed over the past few weeks, including a Surface Pro-like keyboard for the iPad and a foldable iPhone with a color E Ink display.

Then, Sony does it again, grooving into our hearts with its new WH-1000XM5 headphones. The model name doesn’t quite roll off the tongue, but you won’t care once you realize these are some of the best headphones money can buy. We’ll talk about why these headphones are worth their $400 price. We’ll also get into the caveats of headphones like these and why the WH-1000XM5’s new folding mechanism might make you go for the last-generation model.

Finally, we’ll defend printers. We’ll explain why you might consider springing for an all-in-one printer for your at-home print shop. The compact HP Deskjet 6700 is an all-in-one that comes in a few colors and pairs rather nicely with the Amazon Basics laminator if you need to make reusable worksheets! HP also offers Instant Ink, which ships you ink cartridges so that you don’t have to worry about securing more when they run out.

Listen to this week’s episode of Gadgettes on Apple Podcasts, Spotify, or wherever you get your podcasts.



Chromebook 101: how to change your Chrome OS channels and get unreleased features



You might not know it from glancing at a Chromebook, but Google’s Chrome OS is in a constant state of evolution.

The operating system receives minor updates every two to three weeks and major releases every six weeks. And, at any given moment, Google’s staff is working on features and software enhancements that most people won’t see for a matter of weeks — or months.

Here’s a little secret, though: if you’re feeling adventurous, you can gain access to those unreleased enhancements. All it takes is the flip of a virtual switch in your Chromebook’s settings, and you’ll have all sorts of interesting new options at your fingertips.

First, it’s important to understand exactly what’s involved so you can make an educated decision about which setup makes the most sense for you.

Understanding the Chrome OS channels

Chrome OS actually exists in four separate development channels. The software you see on your Chromebook varies considerably depending on which channel you choose:

  • The Stable channel is the polished, ready-for-prime-time version of the software that all devices use by default.
  • The Beta channel is updated weekly and receives new features about a month ahead of its Stable sibling.
  • The Developer channel is updated as frequently as twice a week and sees stuff that’s actively being worked on and has undergone only a small amount of testing.
  • Finally, the Canary channel is what Google describes as the “bleeding edge” Chrome OS path — a channel that receives daily updates prior to any widespread testing and can be accessed only by a Chromebook that’s switched into a special developer mode (which, somewhat confusingly, has nothing to do with the Developer channel).

The Stable channel is the safest option and what the vast majority of people should use — particularly those who need to know their computers will always work flawlessly without any hiccups or unexpected glitches.

If you’re feeling adventurous and don’t mind a bit of a risk, the Beta channel is a good way to get a peek at unreleased features without too much instability. The odds of running into something funky are certainly higher than with Stable, but, by and large, elements in Beta are fairly well-developed and just in the final phases of testing.

Most day-to-day users would be well advised to stay away from the Developer channel since it receives updates as they’re built and is quite likely to contain bugs. And, as for the Canary channel, if you’re not sure whether you ought to be using it, the answer is probably no.

Changing your Chrome OS channel

Once you’ve decided which channel you want to try, here’s how to make the switch:

  • Open your Chromebook’s settings.
  • Click About Chrome OS in the menu on the left, then click Additional details.


  • Look for the category Channel and click the Change channel button. That’ll cause a pop-up to appear that lets you select the Stable, Beta, or Developer channel. (Canary, remember, is available only if your device is in Developer mode — a level of access that opens the door to more advanced forms of OS modification but also disables some of the software’s standard layers of protection. It requires several extra steps to enable and, again, isn’t advisable for most Chromebook users.)


Choose the Stable, Beta, or Developer channel.

  • Click the channel you want, then click the blue Change channel confirmation button that appears.
  • Click the left-facing arrow at the top of the screen to get back to the About Chrome OS page. When you see the Restart button appear near the top of the page (it may take a minute or two), click it.


Hit the Restart button to complete the change.

And that’s it: as soon as your Chromebook finishes restarting, you’ll be on your new channel with all your accounts, files, and preferences in place just like you left them.

If you ever decide you want to move back to the Stable channel, repeat that same process and select Stable.


If you change back to Stable, you’ll have to Powerwash your system.

Just note that moving in that direction — from a higher channel to a less experimental one — generally requires you to Powerwash your Chromebook. A Powerwash erases all of the data stored locally on the device, so you’ll have to sign in anew and start over.


Hit the Restart and reset button to finish the process of restoring the Stable channel.

The one exception: if your Chromebook is connected to a work- or school-based G Suite account, your data won’t be deleted and the change won’t take place immediately. Instead, you’ll have to wait until the lower channel catches up to the higher one in version number, which could take anywhere from a few weeks to a few months.

Update May 20th, 2022, 9:30AM ET: This article was originally published on October 15th, 2019, and has been updated to account for changes in the OS.



HP refreshes Spectre x360 laptop with Intel 12th-gen and Ryzen 5000 chips, Intel Arc GPU, beefed up webcam, and a quieter fan, starting at $1,650 (Scharon Harding/Ars Technica)



Scharon Harding / Ars Technica:

HP Spectre laptops try out Intel discrete graphics, boosted webcams, new hues. HP has revamped its Spectre x360 lineup of convertible …
