Netflix of Video Games: The Start of Cloud Gaming

By Gokul Siddharthan J, DCMME Graduate Student Assistant

Cloud Gaming

The ability to stream songs and movies over the internet has transformed how we watch movies and listen to music over the past decade, but the $140bn video game market hasn’t yet seen the rise of cloud subscription services such as Netflix and Hulu. Recently, Google began tests of a cloud-gaming service called “Project Stream”, using a big-budget game, “Assassin’s Creed Odyssey”. The game is computationally heavy and usually runs on consoles and high-end computer systems. But with the computational heavy lifting transferred to Google’s data centres, even a modest laptop can run the game.

Microsoft is due to start testing a similar service called “Project xCloud”. Electronic Arts, a big gaming company with famous titles such as FIFA, has plans for a streaming product of its own. Nvidia, a maker of graphics cards and chips, is testing a similar service. Sony already has a cloud-gaming service called “PlayStation Now”. There are also a few startups in the fray.

The mechanics of cloud gaming involve running the game in a data centre hundreds of miles away and relaying the feed to the user. The success of cloud-gaming services therefore relies on infrastructure. The computer running the game must react instantly to the user’s input, or the game will feel sluggish. If the latency (the time taken for a round trip of data between the data centre and the player’s computer) exceeds a couple of dozen milliseconds, the user experience starts breaking down, especially in high-end action games. Connections must be rock solid.
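To put that latency budget in concrete terms, here is a minimal Python sketch (my own illustration, not from the article) that times a TCP handshake as a rough proxy for network round-trip time. The host name is a placeholder, and a real cloud-gaming client would measure latency within its own streaming protocol rather than over a plain socket.

    import socket
    import time

    HOST = "stream.example.com"  # hypothetical streaming endpoint
    PORT = 443
    SAMPLES = 10

    def measure_rtt_ms(host: str, port: int) -> float:
        """Time a TCP handshake as a rough proxy for round-trip latency."""
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass  # connection established, handshake complete
        return (time.perf_counter() - start) * 1000  # milliseconds

    rtts = [measure_rtt_ms(HOST, PORT) for _ in range(SAMPLES)]
    avg = sum(rtts) / len(rtts)
    print(f"average round trip: {avg:.1f} ms")
    if avg > 25:  # roughly the "couple of dozen milliseconds" threshold above
        print("likely too slow for a responsive cloud-gaming session")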

Earlier attempts at cloud gaming failed because network infrastructure was insufficient. Nowadays, many homes have high-speed broadband connections, and firms such as Google and Amazon have data centres all over the world along with the technical expertise to run such a service. Incumbents such as Microsoft and Sony face a threat from these new entrants, but it’s still too early to predict who will win the battle.

Cloud gaming appeals for other reasons too. The gaming industry increasingly makes money from users buying digital goods within a game. The marginal cost of producing such digital goods is almost zero, so every sale is nearly pure profit. Margins on console hardware, by contrast, are often thin, so the business model of the gaming industry is likely to change.

People have been trained to expect entertainment to be portable, transferable between devices, and instantly available. The hope is that cloud gaming will appeal to consumers, and that the industry will simply be keeping up with their habits.

 

Machine Learning

By Gokul Siddharthan J, DCMME Graduate Student Assistant

Machine Learning

Machine learning and artificial intelligence are part of the same family. In fact, machine learning is a branch of AI: computer systems that can learn from data, identify patterns, and make decisions without human intervention. When exposed to new data, these systems can learn, adapt, and improve on their own.

Machine learning and AI are everywhere; there is a good chance you are using them without knowing it. Familiar examples include Google’s self-driving car, fraud detection, and online recommendations from Amazon, Facebook, Google Ads, and Netflix. Traditional data analysis worked by trial and error, but that approach isn’t feasible when data becomes large and heterogeneous. Machine learning offers a smart alternative: fast, efficient algorithms that can analyze huge volumes of data, including in real time, and produce accurate results. Other major uses include virtual personal assistants such as Alexa, Google Home, and Siri; online customer support, where chatbots present information from a website; commute predictions, such as Google Maps traffic estimates and online transportation networks like Uber; and social media features, such as Facebook’s “People You May Know” and face recognition in uploaded photos. These are only a few of the areas where machine learning is proving its potential.

So how do machines learn? There are two popular methods: supervised learning and unsupervised learning. About 70 per cent of machine learning is supervised, while unsupervised accounts for around 10-20 per cent. Less common methods include semi-supervised and reinforcement learning. In supervised learning, inputs and outputs are clearly identified, and algorithms are trained on labelled examples: the algorithm receives inputs along with the correct outputs and learns by finding its errors. Supervised learning is used in applications where past data predicts future events, such as spotting fraudulent credit card transactions. Unsupervised learning, by contrast, uses data sets without historical labels; the algorithm must explore the data and find structure on its own. It works well for transactional data, for example identifying customer segments with similar attributes. Other areas where unsupervised learning is used are online recommendations, identifying data outliers, and self-organizing maps.
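To make the contrast concrete, here is a minimal sketch using scikit-learn (an illustration of mine, not from the article): the supervised model trains on labelled examples, while the clustering algorithm must discover the groups from the inputs alone.

    # Toy data: 200 two-dimensional points drawn from two groups.
    from sklearn.datasets import make_blobs
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    X, y = make_blobs(n_samples=200, centers=2, random_state=0)

    # Supervised: inputs X and the correct outputs y are both provided.
    clf = LogisticRegression().fit(X, y)
    print("supervised predictions:", clf.predict(X[:5]))

    # Unsupervised: only the inputs are given; k-means must find the
    # two segments (e.g. customer groups) on its own.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print("discovered cluster labels:", km.labels_[:5])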

Google’s chief economist Hal Varian adds, “just as mass production changed the way products were assembled, and continuous improvement changed how manufacturing was done, so continuous experimentation will improve the way we optimize business processes in our organizations.” It’s clear that machine learning is here to stay.

Sources:

https://www.simplilearn.com/what-is-machine-learning-and-why-it-matters-article

Olympic Games with Robots in Tokyo 2020

by Maria Hartas, DCMME Graduate Assistant

Robots will be showcased at the Tokyo 2020 Olympic and Paralympic Games, one of the world’s most viewed sporting events, watched by millions of people internationally.

The Tokyo 2020 Robot Project includes power-assisted suits and robots that will showcase real-life applications of the technology. Robotic assistance will be ubiquitous at the events: assisting wheelchair users, carrying food and drinks, providing event information, handling waste disposal, and even loading athletes’ luggage onto buses.

Robots have been part of other sporting events in recent years. At Rio 2016, for example, robotic cameras captured still images that were displayed at the Paralympic opening ceremony. Similarly, robotic sensors capture information during NHL ice hockey games.

With robotic applications on display to millions of viewers, the Tokyo 2020 Olympics will be one of the first large-scale international events to incorporate and showcase new technologies at this scale.

Having spread from grocery stores to manufacturing, robots can now be found at the Olympic Games.

How will robots enhance the Tokyo 2020 Olympic games?

Do robots have real-life applications?

Where can robotic technology be applied?

Source: https://www.bbc.com/sport/olympics/47635649

What is Quantum Computing?

By Gokul Siddharthan J, DCMME Graduate Student Assistant

Quantum physics was born in the early 20th century, with renowned scientists such as Albert Einstein and Werner Heisenberg making significant contributions to the field. But quantum computing as a discipline emerged only in the 1970s and 1980s. In the 1990s, researchers discovered quantum algorithms that could solve certain problems faster than any known classical method, leading to increased interest in the field. Further discoveries eventually led to a better understanding of how to build real systems that could implement quantum algorithms and correct for errors.

Quantum Computing

We see the benefits of classical computing in our everyday lives. Most of the applications and devices that are ubiquitous today run on classical computing principles. However, there are problems that today’s systems will never be able to solve: above a certain scale and complexity, there isn’t enough computational power on Earth. To stand a chance of solving these problems, we need a new kind of computing system that scales exponentially as complexity grows.

Quantum computing is different from classical computing at a fundamental level. In classical computing, information is processed and stored in bits, 0s and 1s; millions of bits work together to create the results you see every day. In quantum computing, different physical phenomena, namely superposition, entanglement, and interference, are used to manipulate information. To harness them, we rely on a different physical device: the quantum bit, or qubit. Just as a bit is the basic unit of information in a classical computer, a qubit is the basic unit of information in a quantum computer.

So how do qubits store information? A number of elementary particles, such as electrons and photons, can be used, with either their charge or their polarization representing 0s and 1s. Each such particle serves as a qubit. The nature and behaviour of these particles form the basis of quantum computing. The two most relevant aspects of quantum physics here are superposition and entanglement. Superposition describes the quantum state in which a particle exists in multiple states at the same time; it allows quantum computers to consider many different possibilities at once.
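As a rough illustration (my own sketch, not from the article), a single qubit can be simulated classically as a two-component complex vector, with gates as 2x2 matrices. Applying a Hadamard gate to the 0 state puts the qubit into an equal superposition of 0 and 1.

    # A tiny NumPy sketch of one simulated qubit (illustrative only).
    import numpy as np

    ket0 = np.array([1, 0], dtype=complex)        # the classical-like state |0>
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

    state = H @ ket0              # equal superposition of |0> and |1>
    probs = np.abs(state) ** 2    # measurement probabilities
    print(probs)                  # [0.5 0.5] -> 0 or 1 with equal probability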

Qubit

The power of quantum computing is hard to overstate. A quantum computer comprising 500 qubits has the potential to consider 2^500 states in a single step, and 2^500 is far more than the number of atoms in the known universe. This is true parallel processing. Today’s classical computers with so-called parallel processors still only truly do one thing at a time; there are just two or more of them doing it. Classical computers remain better at some tasks, such as email, spreadsheets, and desktop publishing. The intent of quantum computers is to be a different tool for different problems, not to replace classical computers.
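The flip side of that power is why classical machines cannot keep up: simulating n qubits classically requires storing 2^n complex amplitudes. This short snippet (an illustration of mine, assuming 16 bytes per complex amplitude) shows how quickly the memory requirement explodes.

    # Memory needed to simulate an n-qubit state vector classically.
    for n in (10, 30, 50, 500):
        amplitudes = 2 ** n
        print(f"{n:>3} qubits -> {amplitudes:.3e} amplitudes, "
              f"~{amplitudes * 16:.3e} bytes")
    # At 50 qubits the state vector already needs ~18 petabytes;
    # at 500 qubits the number of amplitudes dwarfs any conceivable storage.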

Quantum computers are well suited to optimization problems, from figuring out the best way to schedule flights at an airport to determining the best delivery routes for FedEx trucks. Google has announced a quantum computer that it says is 100 million times faster than any classical computer in its lab. Every day we produce 2.5 exabytes of data, equivalent to the content of 5 million laptops, and quantum computers could make it possible to process the volumes of data we generate in the age of big data. Rather than using more electricity, quantum computers may reduce power consumption by a factor of 100 to 1,000, because they exploit quantum tunnelling. IBM’s computer Deep Blue defeated chess champion Garry Kasparov in 1997; it gained its advantage by examining 200 million possible moves each second, whereas a quantum machine could calculate a trillion moves per second. Google has stated publicly that it aims to build a viable quantum computer within the next five years by launching a 50-qubit machine. Today’s top supercomputers can still match everything a 5-20 qubit quantum computer can do, but they would be surpassed by a machine with 50 qubits.

Though a truly viable quantum computer is not yet a reality, the race is on, with many companies already offering quantum machines. Quantum computing is no longer in the distant future.

Sources:

https://whatis.techtarget.com/definition/qubit

https://www.research.ibm.com/ibm-q/learn/what-is-quantum-computing/

https://www.forbes.com/sites/bernardmarr/2017/10/10/15-things-everyone-should-know-about-quantum-computing/#497b1e011f73

Eye-Tracking Technology

By Gokul Siddharthan J, DCMME Graduate Student Assistant

Eye tracking

Eye tracking dates back to 1879, when Louis Émile Javal noticed that readers do not read text fluently; instead, they make short movements interspersed with pauses. His device made physical contact with the eye. Since then, numerous innovators have developed improved versions of the technology, and for most of the 20th century scientists were focused on making eye-tracking techniques precise and non-invasive. Nowadays there are wearable devices, and even a web camera can capture our eye movements.

Eye tracking evolved in three phases: first, the discovery of basic eye movements (1879-1920); second, research into the factors affecting reading patterns (1930-1958); and third, improvements in eye-recording systems that increased accuracy and ease of measurement (1970-1998). The bottlenecks to wider use were the costs of R&D and materials, along with bulky equipment and limited data storage and processing capabilities.

How does an eye tracker work? It consists of cameras, projectors, and algorithms. The projectors create a pattern of near-infrared light on the eyes; the cameras take high-resolution images of the user’s eyes and the pattern; and machine learning, image processing, and mathematical algorithms then determine the eye position and the gaze point. Eye tracking has been used in Samsung’s iris scanner and Apple’s Face ID, as well as in visual attention sequencing and the creation of heatmaps. The technology has been applied in virtual reality, gaming, medicine, and advertising. The remaining challenges are delivering a more immersive user experience, better product development, and mass adoption of consumer-level devices.
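The image-processing step can be illustrated with a deliberately simplified Python/OpenCV sketch (my own example, assuming an ordinary grayscale eye image rather than the near-infrared setup described above): the pupil is found as the darkest blob, and its centroid is taken as the eye position.

    import cv2

    eye = cv2.imread("eye.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
    eye = cv2.GaussianBlur(eye, (7, 7), 0)             # suppress sensor noise

    # The pupil is the darkest region; keep only very dark pixels.
    _, mask = cv2.threshold(eye, 40, 255, cv2.THRESH_BINARY_INV)

    # Take the largest dark contour as the pupil and compute its centre.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    pupil = max(contours, key=cv2.contourArea)
    m = cv2.moments(pupil)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    print(f"estimated pupil centre: ({cx:.0f}, {cy:.0f})")
    # A real tracker also uses the corneal reflections of the projected
    # pattern plus a calibrated model to map this to an on-screen gaze point.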

However, the ethical aspects of eye tracking have to be considered, because the potential for privacy intrusion is serious. Moral decisions are affected by what our eyes focus on, so tracking eye movements can help reveal a user’s decision-making process. People’s responses can be influenced through their eye movements; as a result, the potential for manipulation is high. Eye movements also reveal insights into how different people think, analyze, and process information. It won’t be long before people try to correlate eye-tracking results with criminality.

In medicine, schizophrenia, Alzheimer’s, PTSD, and eating disorders all have symptoms that are reflected in eye movements. In fact, one of the basic checks a doctor performs is to see how the pupil reacts when a flashlight is shined on it, which can indicate whether there is a serious problem. Changes in pupil size, scan paths, and fixation points can even assist in determining which gender an individual is attracted to. As the technology advances, it could threaten privacy far beyond the limited confines of smartphone and computer screens. Eye tracking has the potential to reveal a great deal about device users, and human beings give away more through their eye movements than they realize.

Sources:

http://www.iadt.edu/student-life/iadt-buzz/august-2013/it-does-eye-tracking-undermine-privacy

https://www.aclu.org/blog/national-security/privacy-and-surveillance/privacy-invading-potential-eye-tracking-technology

Cloud-Based vs. Localized Supply Chain Management

By Gokul Siddharthan J, DCMME Graduate Student Assistant

Cloud Based Supply Chain

Cloud-based supply chain management provides numerous advantages over a localized model. The cloud makes the system more efficient, more affordable, highly scalable, safer, and easier to integrate with existing systems. Cloud-based solutions also involve low initial investment, quick deployment, continuous upgrades, and low maintenance. The shortfalls of a rigid, localized supply chain management system make it ill-suited to the dynamic challenges of today’s business needs.

A localized system limits how much a company can innovate. The investment required to constantly upgrade and maintain it fluctuates and can strain resources that would otherwise go into product innovation. A rigid supply chain may not just hinder growth but also put the company’s survival at risk.

Cloud-based systems are more affordable, and they come with capabilities that are expensive to build in localized systems. The scale of cloud providers drives prices down and expands the capabilities on offer as their customer base grows. Matching those capabilities locally would require dedicated systems administrators, complex infrastructure, and expensive equipment that aren’t feasible for an individual company. Affordability, combined with a constantly upgraded supply chain management system, offers far more benefit than a localized model.

Cloud systems are also more efficient, thanks to automation and data analysis, which can be used to identify and eliminate waste in the flow of information and goods by making the system more transparent while putting less strain on the budget. The fear of downtime or data loss, and the lost profits that would follow, is greatly reduced: a cloud-based model has more redundancy and better fail-safes, and suffers less damage than a localized model.

Another big benefit of the cloud model is that the transformation can happen gradually. You can select which parts of your supply chain should move into the cloud to deliver the best value, and the migration can be prioritized and carried out at a pace the organization finds comfortable. Management, maintenance, and upgrades can be outsourced to the service provider, and because the provider operates at large scale, costs are driven down substantially. Other benefits include an easy-to-use interface, an intuitive user experience, and better analytics, all accessible online from anywhere, at any time, on any device.

A cloud-based model automates most routine everyday tasks and is a step forward in continuous improvement, freeing the company to focus on the work that truly matters. Most businesses can benefit from moving their supply chain systems to the cloud, improving their ability to stand out from the competition.

Sources:

https://www.scmr.com/article/why_supply_chain_leaders_are_moving_to_the_cloud

https://cerasis.com/cloud-based-supply-chain-management/

The Impact of 3D Printing on the Supply Chain

 

What is 3D printing?

 

3D printing, also known as additive manufacturing (AM; the two terms have become interchangeable), is a technology for building three-dimensional solid objects up in layers from a digital file, without the need for a mould or cutting tool. It starts from a computer-aided design (CAD) model, which is translated into the three-dimensional object. The design is sliced into many two-dimensional layers, which instruct the 3D printer where to deposit material. This additive process, depositing successive thin layers of material upon each other, produces the final three-dimensional product.
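The slicing step can be sketched in a few lines of Python (a toy illustration of mine, assuming triangles are given as plain coordinate tuples and ignoring edge cases such as a vertex lying exactly on the plane): each triangle of the mesh is intersected with a horizontal plane z = h to get the 2D line segments the printer traces for that layer.

    def slice_triangle(tri, h):
        """Return the segment where triangle tri crosses plane z = h, or None."""
        points = []
        # Walk the three edges: (v0, v1), (v1, v2), (v2, v0).
        for (x1, y1, z1), (x2, y2, z2) in zip(tri, tri[1:] + tri[:1]):
            if (z1 - h) * (z2 - h) < 0:           # edge crosses the plane
                t = (h - z1) / (z2 - z1)          # interpolation parameter
                points.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
        return tuple(points) if len(points) == 2 else None

    # A single triangle standing upright, sliced halfway up:
    tri = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (5.0, 0.0, 10.0)]
    print(slice_triangle(tri, 5.0))  # -> ((7.5, 0.0), (2.5, 0.0))

A real slicer repeats this over every triangle at every layer height and then chains the segments into closed loops that become the printer’s toolpaths.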

Impact of 3D printing on the Supply Chain

 

The impact of AM technologies on the global setup of supply chains can be very disruptive. The technology has the potential to eliminate the need for both high-volume production facilities and low-level assembly workers, drastically reducing supply chain cost. In terms of inventory and logistics, products can be printed on demand, which means finished goods no longer have to sit on shelves or in warehouses; whenever a product is needed, it is simply made. That collapses the supply chain down to its simplest parts and adds new efficiencies to the system. Those efficiencies run the entire chain, from the costs of distribution, assembly, and carrying inventory all the way to the component itself, while reducing scrap, maximising customisation, and improving assembly cycle times.


Traditional supply chain vs AM model

 

The traditional supply chain model is founded on the industry’s traditional constraints: the efficiencies of mass production, the need for low-cost, high-volume assembly workers, and so on. 3D printing bypasses those constraints. It finds its value in printing low-volume, customer-specific items, items capable of much greater complexity than is possible through traditional means. This at once removes the need for both high-volume production facilities and low-level assembly workers, cutting out at least half of the supply chain in a single blow. From that point of view, it is no longer financially efficient to send products across the globe when manufacturing can be done almost anywhere at the same cost or lower. The raw materials today are digital files, and the machines that turn them into products are wired and connected, faster and more efficient than ever, which demands a new model of supply chain. By supporting local sourcing, 3D printing has the potential to tear established global supply chain structures apart and reassemble them as new, local systems. Furthermore, the technology creates a close relationship between design, manufacturing, and marketing, and it could transform the global supply chain into one that is globally connected but thoroughly local.


 

Questions:

What is the future of 3D printing?

What are the challenges in using 3D printing in supply chains?

 

Sources:

https://www.researchgate.net/…/320927657_The_Impact_of_3D_Printing_Technology

https://www.stratasysdirect.com/resources/infographics/3d-printing-impact-supply-chain