Items tagged with machine learning

Thanks to a new deep neural network called ExoMiner, NASA has validated a whopping 301 exoplanets that human eyes had missed. These worlds join the 4,569 other planets already validated by NASA. NASA scientists, astronomers, and researchers are always on the lookout for new exoplanets that may exist, and until now the job of finding them has rested mostly on the eyes of those doing the looking. Now NASA has implemented a new way of searching for these exoplanets, one that uses AI and machine learning methods that automatically learn a task when provided with enough data. ExoMiner takes advantage of NASA's Pleiades supercomputer. It can tell the difference between... Read more...
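At its core, a candidate-vetting network like this is a binary classifier over light-curve data. The sketch below is a minimal stand-in rather than ExoMiner itself; the 1D-CNN layout, curve length, and random data are illustrative assumptions only.

    import torch
    import torch.nn as nn

    class TransitVetter(nn.Module):
        """Toy vetting model: scores a light curve as planet candidate vs. false positive."""
        def __init__(self, curve_len=201):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(32, 1))

        def forward(self, x):  # x: (batch, 1, curve_len)
            return torch.sigmoid(self.classifier(self.features(x)))  # P(planet)

    # Random curves stand in for real, folded transit photometry.
    model = TransitVetter()
    fake_curves = torch.randn(8, 1, 201)
    print(model(fake_curves).squeeze())  # one confidence score per candidate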
Meta has made no secret of its intent to expand its metaverse with new projects and products. It seems Mark Zuckerberg wants one of those new ventures to help robots experience and understand the world the way flesh-and-blood humans do: through touch. Or maybe Zuck pines to be human. Either way, this is some interesting research. Facebook and its new parent company Meta have been making headlines as of late as they try to recover from the backlash over internal documents leaked by a former employee. As Meta moves forward with creating Zuckerberg's vision of the metaverse, it is looking at new ways it can advance technology and people's minds... Read more...
AMD is rolling out a new graphics product based on its second-generation Radeon DNA (RDNA 2) architecture, though it's not for home PCs and consumers. Instead, the Radeon Pro V620 is a multi-purpose GPU accelerator for cloud workloads, including doling out "immersive AAA game experiences" through cloud gaming. If there's a silver lining to the GPU shortage as a whole, it's that companies are intensifying their efforts to bolster cloud gaming services, which shift much of the hardware burden away from the user's end. Low-powered laptops and even smartphones can tap into some of these services for a high-end gaming experience. "We’re seeing adoption of gaming in the cloud from... Read more...
As artificial intelligence and machine learning capabilities play ever-larger roles in everyday tasks, there is a need for ever-faster hardware and architectures. Arm is on the case. During its DevSummit conference this week, Arm's senior director of technology, Ian Bratt, talked a bit about what's in store for the company's next-gen GPU architecture. His 20-minute keynote was largely focused on AI and ML technologies, including things like the human plasticity curve and other buzz phrases that are not likely to be widely recognized by the general public. They are, however, important for how consumer devices operate and what they are capable of doing. Think about your smart... Read more...
SK Hynix is on cloud nine today over claims it has developed the first-ever High Bandwidth Memory 3 (HBM3) DRAM solution, beating other memory makers to the punch. According to SK Hynix, HBM3 is the world's best-performing DRAM, with the ability to move 819 gigabytes of data per second for a delightful performance bump over previous iterations. Speaking of which, HBM3 is technically a fourth-generation implementation of HBM, with the previous three in ascending order being HBM, HBM2, and HBM2E. The latter is an update to the HBM2 specification, with more bandwidth and capacity on tap—SK Hynix introduced its first HBM2E product in August 2019, with 460GB/s of bandwidth, and began mass producing... Read more...
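For a quick sanity check on that headline figure, here is the back-of-the-envelope math, assuming the standard 1,024-bit HBM stack interface and a 6.4Gb/s per-pin data rate (both figures are our assumptions here, since the blurb above only quotes the total):

    # Bandwidth per HBM3 stack, assuming a 1,024-bit interface at 6.4 Gb/s per pin.
    pins = 1024                # bits (pins) per HBM stack interface
    per_pin_gbps = 6.4         # assumed data rate per pin for HBM3
    bandwidth_gb_per_s = pins * per_pin_gbps / 8
    print(bandwidth_gb_per_s)  # 819.2 GB/s, matching the quoted figure

The same formula with a 3.6Gb/s per-pin rate lands at roughly 460GB/s, which lines up with the HBM2E product mentioned above.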
Google today launched its newest flagship Pixel phones, and we've already posted a Pixel 6 and Pixel 6 Pro guided tour with a handful of photos, a hands-on video, and loads of specifications and features to digest. During today's event, Google also shared some more details about Tensor and what it aims to achieve. Benchmark wins are not necessarily among the goals, though this is still an interesting chunk of silicon. For its first-ever mobile chip designed in-house, Google is hanging its hat on machine learning/AI capabilities. Enabling new experiences was purportedly the main reason why Google went down this road, rather than tapping a third party, as it has always done in the past. "We... Read more...
With the proliferation of artificial intelligence in recent years, the term "neuromorphic" is being used much more often in the tech sector. If you're a native English speaker, you can probably surmise that neuromorphic means something along the lines of "brain-like." Indeed, the buzzword of the day is "neuromorphic processing," and it refers to computers—previously called "cognitive computers"—designed to mimic the function of the human brain. The reason that's the buzzword of the day is that Intel just announced its second-generation neuromorphic processor, Loihi 2. If you've never heard of the original Loihi, you probably aren't involved in bleeding-edge artificial intelligence... Read more...
It is well established that facial recognition based on machine learning is not perfect by any stretch of the imagination; therefore, using it for security purposes is likely a bad idea. That case has now been bolstered by research from Ben-Gurion University of the Negev, which showed that digital and real makeup could trick facial recognition systems with a success rate of up to 98%. The researchers at Ben-Gurion University explained that facial recognition is widely used in subways, airports, and workplaces to automatically identify individuals. In this experiment, the ArcFace face recognition model was used with 20 blacklisted participants who would be flagged in a real-world facial... Read more...
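Systems like this typically work by comparing face embeddings: the camera frame is mapped to a vector, and a person is flagged if that vector sits close enough to one enrolled on the blacklist. The snippet below is a minimal sketch of that matching step, with random vectors and an illustrative threshold standing in for real ArcFace embeddings.

    import numpy as np

    def flag_if_blacklisted(probe, gallery, threshold=0.5):
        """Flag the probe if its cosine similarity to any enrolled embedding clears the threshold."""
        probe = probe / np.linalg.norm(probe)
        gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
        sims = gallery @ probe                     # cosine similarity to each blacklisted ID
        best = int(np.argmax(sims))
        return sims[best] >= threshold, best, float(sims[best])

    blacklist = np.random.randn(20, 512)           # 20 enrolled identities (random stand-ins)
    probe = np.random.randn(512)                   # embedding of one camera frame
    print(flag_if_blacklisted(probe, blacklist))

The makeup attack works by nudging the probe embedding just far enough from its enrolled counterpart that the similarity score falls below the threshold.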
While robots are cool, the jury is still out on whether they will want to kill us pesky humans if the robot uprising occurs. This is not helped by the fact that we are making more robust robots that can walk and move as well as, if not better than, humans. Enter Cassie: a bipedal, legs-only robot devised at Oregon State University that just traversed five kilometers in under an hour. Since 2017, students at OSU have worked on Cassie under the direction of robotics professor Jonathan Hurst, using a 16-month, $1 million grant from the Advanced Research Projects Agency of the U.S. Department of Defense. Hurst, who founded spin-off company Agility Robotics, explained that “The Dynamic Robotics... Read more...
Games are one of the best testbeds for AI because they require problem-solving, forward-thinking, and other skills that normally only humans possess. So far, AI has gotten a handle on games like Go and as many as 57 different Atari 2600 titles, but those are not terribly difficult by comparison. What if AI were pitted against one of the most notoriously hard games of all time? As it turns out, that is exactly what Facebook wants to do in taking on NetHack. NetHack is a 1980s text-based single-player dungeon exploration game with a focus on “discovering the detail of the dungeon and not simply killing everything in sight,” as the NetHack website explains. If you decide to kill everything in sight, or at least... Read more...
Do you remember discussing (or arguing) with your schoolteacher about how you would never use some of the lessons they taught? There were probably some things you never used and simply forgot about, but what if you could never forget such extraneous information? As it turns out, AI and machine learning have this exact problem, and Facebook AI researchers are looking to tackle it by teaching AI to forget things. Typically, AI is rather good at various tasks, but when it comes to searching long-term memories, performance drops and the cost of storage grows exponentially. This can be quite the problem as time goes on, since we constantly take in new information that would need to... Read more...
Some people are concerned about games like GTA 5 affecting people's behavior in real life, but what if those games actually looked like real life? Researchers at Intel Labs may have figured out how to do just that, using machine learning to make rendered footage look photorealistic. This technology could bring gaming into a new era if it makes it out into the wild. Since the dawn of video games, people have been trying to make them as realistic as possible to achieve the most immersive experience. PC and console hardware has grown more capable over time as well, which nicely complements this goal. Take, for example, the Tomb Raider games, the original of which looks to us more like a Cubism art piece versus... Read more...
Last year's Overwatch League was canceled due to the COVID-19 pandemic, and that temporarily pumped the brakes on some interesting AI projects, as well. At the time, Blizzard and IBM were set to announce that Big Blue was about to become the cloud, AI, and machine learning analytics partner for the game's official e-sports platform. Blizzard and IBM hope that this partnership will lead to figuring out new ways to evaluate performance, fix balance issues, and much more. Now that Overwatch League is back on for 2021, the partnership is starting to bear fruit. IBM has developed an AI-powered ranking system that will not only separate the wheat from the chaff, but hopefully also bring the best of... Read more...
Arm processor architecture helps make the world go 'round, as chips using the instruction set and core architecture reside in devices ranging from smartphones and tablets to automotive applications, smart TVs, appliances, and networking equipment. Most tech enthusiasts, however, are primarily familiar with Arm architecture in consumer devices like Android smartphones (typically powered by Qualcomm-designed Snapdragon Arm-based SoCs) and Apple devices (iPhones and the new M1 Macs), which utilize various derivatives of Arm's core architecture. For the past decade, these devices have been employing variants of the Armv8 architecture, the first native 64-bit Arm instruction set (starting... Read more...
Deepfakes have been around for a few years now, and with each passing day, it seems as though the technology keeps getting better and more lifelike. We saw this last year with a Back To The Future deepfake starring Robert Downey Jr. and Tom Holland rather than the real Doc and Marty. Now, deepfaker and visual effects artist Chris Ume has taken Tom Cruise’s face and put it on an actor to make a deepfake TikTok series, and it is wild. For those who do not know, deepfakes are AI-generated images or videos that can be made to look like a specific person. The way they work, in essence, is by training AI with millions of images of the targeted person. The trained AI is then paired with an... Read more...
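The usual face-swap recipe is an autoencoder with a shared encoder and one decoder per identity: train on both faces, then encode a frame of the stand-in and decode it with the target's decoder. Below is a minimal, untrained sketch of that wiring; the layer sizes and 64x64 frames are illustrative assumptions, not Ume's actual pipeline.

    import torch
    import torch.nn as nn

    def make_encoder():
        # Shared encoder: learns face structure from frames of BOTH people.
        return nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 512), nn.ReLU())

    def make_decoder():
        # One decoder per identity: learns to paint that specific person's face.
        return nn.Sequential(nn.Linear(512, 64 * 64 * 3), nn.Sigmoid(),
                             nn.Unflatten(1, (3, 64, 64)))

    encoder = make_encoder()
    decoder_cruise = make_decoder()   # would be trained only on Tom Cruise frames
    decoder_actor = make_decoder()    # would be trained only on the stand-in actor

    # The swap: encode a frame of the stand-in actor, decode it as Tom Cruise.
    actor_frame = torch.rand(1, 3, 64, 64)
    swapped = decoder_cruise(encoder(actor_frame))
    print(swapped.shape)              # (1, 3, 64, 64) "Cruise-ified" frame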
Recently, Microsoft patented the creation of an AI chatbot modeled on a specific person, living or dead. Now, a genealogy company called MyHeritage has partnered with deep learning and image processing company D-ID to create something called “Deep Nostalgia.” This technology can bring a person's ancestors back to life by running a process that upscales, sharpens, and animates any image uploaded. Announced at RootsTech Connect 2021, the world’s largest genealogy conference, “Deep Nostalgia” is a licensed technology that uses D-ID’s AI Face Platform on the back end. Essentially, MyHeritage filmed people performing a basic set of motions, which could then be paired... Read more...
In 2017, Microsoft filed a patent entitled “Creating a Conversational Chat Bot Of A Specific Person,” which the USPTO finally approved on December 1, 2020. Over 21 pages of material are provided in the patent, covering the software, hardware, and other minutiae behind the idea. The basic premise, however, is that Microsoft will scan and scrape "social data" from the internet about you, another person, or perhaps even a loved one who has passed away. Then, the company will use that data to create a chatbot with the personality of the person being targeted, a sort of persona replication, using machine learning and AI. This chatbot can then be paired... Read more...
Imagine a world where you could control a computer with your mind. Researchers at the University of California, San Francisco Weill Institute for Neurosciences have created a brain-computer interface (BCI) that relies on machine learning. Paralyzed individuals were able to control a computer cursor with their brain activity without the need for constant retraining. Past BCIs “...used ‘pin-cushion’ style arrays of sharp electrodes” that would penetrate the brain. These arrays would allow for more “sensitive recordings,” but they were relatively short-lived. Researchers would need to reset and recalibrate their systems every... Read more...
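At its simplest, the decoding side of a BCI is a regression problem: map a window of neural features to an intended cursor movement. Here is a minimal sketch with synthetic data and a plain linear decoder; the real system reads from electrodes resting on the brain and adapts continuously rather than training once, so treat this purely as an illustration of the idea.

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    neural = rng.standard_normal((1000, 128))       # 1,000 time bins x 128 channels of features
    true_w = rng.standard_normal((128, 2))          # hidden mapping to (x, y) cursor velocity
    velocity = neural @ true_w + 0.1 * rng.standard_normal((1000, 2))

    decoder = Ridge(alpha=1.0).fit(neural[:800], velocity[:800])   # train on early data
    print(decoder.score(neural[800:], velocity[800:]))             # held-out R^2 of the decoding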
There are currently several AI tools whose purpose is to create more life-like images. The most recent addition is PULSE (Photo Upsampling via Latent Space Exploration), an AI-infused application that can transform low-resolution, pixelated images into high-resolution ones. The creators of PULSE mainly focused on human faces, but others have already used the tool to create slightly terrifying images of video game characters that may just haunt your dreams. PULSE was developed by a team at Duke University in Durham, North Carolina. It differs from other tools because “instead of starting with the LR (low resolution) image and slowly adding detail, PULSE traverses the high-resolution... Read more...
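In other words, PULSE searches a face generator's latent space for a high-resolution image that, once scaled back down, matches the pixelated input. The loop below is a minimal sketch of that idea; the tiny untrained "generator" here is only a stand-in for the pretrained face generator the published method relies on.

    import torch
    import torch.nn.functional as F

    # Stand-in generator: latent vector (1, 64) -> "high-res" image (1, 3, 64, 64).
    generator = torch.nn.Sequential(
        torch.nn.Linear(64, 3 * 64 * 64), torch.nn.Sigmoid(),
        torch.nn.Unflatten(1, (3, 64, 64)),
    )

    lr_target = torch.rand(1, 3, 8, 8)          # the pixelated input face
    z = torch.randn(1, 64, requires_grad=True)  # latent code we will optimize
    opt = torch.optim.Adam([z], lr=0.05)

    for step in range(200):
        hr = generator(z)                                        # candidate high-res face
        downscaled = F.interpolate(hr, size=(8, 8), mode="bilinear",
                                   align_corners=False)
        loss = F.mse_loss(downscaled, lr_target)                 # does it explain the LR pixels?
        opt.zero_grad(); loss.backward(); opt.step()

    print(float(loss))   # a small loss means we found a plausible high-res explanation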
Machine learning and artificial intelligence are hot topics in virtually every tech sector right now. The technology is gaining traction in many industries, from massive social networks that analyze mountains of data to the tiniest of IoT smart home and mobile devices. Even in our everyday workflow here at HotHardware, we utilize a few key AI-enabled applications to enhance our content. Further, if you’ve ever talked to a smartphone for speech-to-text messaging or for Google Assistant recommendations, AI has impacted your connected experiences as well. For those who may be unfamiliar with how machine learning and AI technologies work, we should probably... Read more...
One of the first arcade games you may remember playing on an Atari 2600, what seems like a million years ago, was Pac-Man. However, that was not exactly a faithful recreation of the true classic coin-op arcade version. Now, several decades later, NVIDIA has trained an AI model called GameGAN to produce a proper recreation, and what makes this so interesting is that there is no underlying game engine whatsoever. These days, the original Pac-Man arcade game is easy enough to recreate on virtually any modern platform, though NVIDIA's machine learning approach is unique and far more challenging. GameGAN did not have the benefit of being programmed with the game's fundamental rules or... Read more...
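Conceptually, a neural "game engine" like this replaces hand-coded rules with a network that predicts the next frame from the current frame and the player's input. The snippet below is a bare-bones illustration of that loop, not GameGAN's actual architecture (which adds a memory module and adversarial training); the sizes and action set are made up for the example.

    import torch
    import torch.nn as nn

    class NeuralGameEngine(nn.Module):
        """Toy next-frame predictor: (current frame, player action) -> next frame."""
        def __init__(self, frame_pixels=84 * 84, n_actions=5):
            super().__init__()
            self.step = nn.Sequential(
                nn.Linear(frame_pixels + n_actions, 512), nn.ReLU(),
                nn.Linear(512, frame_pixels), nn.Sigmoid(),
            )

        def forward(self, frame, action_onehot):
            return self.step(torch.cat([frame, action_onehot], dim=1))

    engine = NeuralGameEngine()
    frame = torch.rand(1, 84 * 84)                   # current (flattened) screen
    action = torch.zeros(1, 5); action[0, 2] = 1.0   # e.g., "move left" as a one-hot vector
    next_frame = engine(frame, action)               # predicted next screen, no rules coded
    print(next_frame.shape)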
Researchers at NVIDIA have come up with a clever machine learning technique for taking 2D images and fleshing them out into 3D models. Normally this happens in reverse—these days, it's not all that difficult to take a 3D model and flatten it into a 2D image. But creating a 3D model without feeding a system 3D data is far more challenging. "In traditional computer graphics, a pipeline renders a 3D model to a 2D screen. But there’s information to be gained from doing the opposite—a model that could infer a 3D object from a 2D image would be able to perform better object tracking, for example," NVIDIA explains. What the researchers came up with is a rendering framework called... Read more...
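The key trick in this line of work is a renderer you can differentiate through: guess the scene parameters, render them to 2D, compare against the photo, and push gradients back into the guess. The toy below applies that idea in miniature, recovering a circle's position and radius from a 2D silhouette; it illustrates the principle only and is not NVIDIA's framework.

    import torch

    def soft_render(cx, cy, r, size=64):
        # Differentiable "renderer": a soft silhouette of a disk at (cx, cy) with radius r.
        ys, xs = torch.meshgrid(torch.arange(size, dtype=torch.float32),
                                torch.arange(size, dtype=torch.float32),
                                indexing="ij")
        dist = torch.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)
        return torch.sigmoid(r - dist)

    # The "photo" we want to explain: a disk at (40, 24) with radius 12.
    target = soft_render(torch.tensor(40.0), torch.tensor(24.0), torch.tensor(12.0))

    params = torch.tensor([32.0, 32.0, 5.0], requires_grad=True)   # initial guess
    opt = torch.optim.Adam([params], lr=0.5)
    for _ in range(300):
        pred = soft_render(params[0], params[1], params[2])        # render the current guess
        loss = ((pred - target) ** 2).mean()                       # compare to the 2D image
        opt.zero_grad(); loss.backward(); opt.step()               # refine the guess via gradients

    print(params.detach())   # should drift toward (40, 24, 12)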