Augmented Reality in Healthcare

By Gokul Siddharthan J, DCMME Graduate Student Assistant


Augmented reality has the potential to play a major role in improving healthcare. Only a few years have passed since the first use of augmented reality in medicine, and it has already earned an important place in everyday medical practice.

These technologies blend computer-generated images and data with real-world views, making it possible for doctors to visualize bones, muscles, and organs without having to cut open a body. Experts say AR will transform medical care by improving precision during operations, reducing medical errors, and giving doctors and patients a better understanding of complex medical problems.

There are numerous uses of AR in the medical field; a few are describing symptoms, nursing care, surgery, diabetes management, navigation, and pharmacy. When it is hard for a patient to describe a symptom to a doctor, AR can help: there are AR apps that simulate how vision is impaired by different diseases, helping patients better understand their condition and describe it accurately. About 40% of first intravenous injection attempts fail, and the rate is even higher for children and elderly patients; one app uses augmented reality to project an image of the patient's veins onto the skin. Spinal surgery is a long and difficult process, but AR can shorten it, cut the risks, and improve the results. A startup has created an AR headset for spine surgeons that overlays a 3D model built from the CT scan onto the patient's spine, giving the surgeon a kind of "X-ray" vision.

There are several benefits for both patients and doctors. AR reduces the risks associated with minimally invasive surgery: the screens displaying vital statistics and the feed from an endoscopic camera could be replaced by an AR headset. Patients can use AR for educational purposes, to better understand their condition and prepare for a procedure, and there are apps that help a non-medical person understand the body better. Risk-free medical training is another promising application: training becomes more interactive, combining theory with real-world practice on a display right in front of the trainee's eyes.

AR has already shown its value in medicine. It is only a matter of time before better applications and devices emerge that can be used effectively on a daily basis. As healthcare costs continue to grow, AR will play a vital role in prevention, management, and treatment for millions of people.

Sources:

https://thinkmobiles.com/blog/augmented-reality-medicine/

Apple's Advances into Heart Monitoring

By Gokul Siddharthan J, DCMME Graduate Student Assistant


In recent years, several gadgets have appeared that monitor heartbeats. They come in various forms and designs, offering functions that were not around, or even thought of, a few years ago. One of the major transformations in smart gadgets has been in the wearables segment, notably watches. Smartwatches now perform functions that phones handled a decade ago: they provide not only the time of day but also the weather, heart-rate monitoring, step counts, messages, emails, GPS, news, phone calls, music, and other apps supported by the device.

One device that has stood out from peers such as Samsung, Fossil, Garmin, and Fitbit is the Apple Watch, which has been rated the best smartwatch for several years. It launched at a time when smartwatches were not yet popular; Apple not only introduced the watch but also converted many users of traditional watches and changed how the category was perceived. Now Apple has started to make inroads into healthcare applications. Recent versions of the Apple Watch have been able to monitor heartbeats and notify wearers of irregular heart rhythms, but the latest version, the Watch Series 4, can take an ECG of the wearer. Users simply open the app, hit start, hold a finger on the digital crown, and it takes a reading. The feature was introduced with the endorsement of the American Heart Association and has been cleared by the FDA.

A new feature of this watch is its ability to inspect the ECG for signs of a common heart arrhythmia called “atrial fibrillation”, or AFib. It is one of the most common cardiac conditions and occurs when the heart’s upper chambers do not beat in a coordinated fashion. Blood pools in parts of the chambers, forming clots, and such patients are three to five times more likely to have a stroke. AFib occurs in around 2% of the population. The risk of suffering from it increases greatly with age.

The recent "Apple Heart Study", covering 420,000 participants, looked at the predictive value of the device's monitoring for irregular pulses. It found the watch agreed with a gold-standard method only 84% of the time. A separate study conducted by a research organization contracted by Apple found the ECG app's algorithm correctly identified 98.3% of AFib recordings (sensitivity) and 99.6% of non-AFib recordings (specificity). These numbers are far better than the rest of the competition. Clearance by the FDA and by regulators in several other countries shows how far technology has penetrated healthcare. Healthcare must get ready for the inevitable arrival of technology in medicine: a torrent of data is coming, and it would be wise to build the infrastructure to handle it. Dr Jonathan Mant, a professor of primary care research at the University of Cambridge, concludes that it is "paradigm-shifting. I just don't know if it is going to be in a good way or a bad way."
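To see what those figures imply in practice, here is a back-of-the-envelope sketch in Python using only the numbers quoted above and the roughly 2% AFib prevalence mentioned earlier; it is an illustration of the arithmetic, not a result from either study.

```python
# Rough positive-predictive-value check: if ~2% of wearers have AFib,
# how many positive readings from an algorithm with the quoted
# sensitivity and specificity are actually true positives?
sensitivity = 0.983   # share of AFib recordings correctly flagged
specificity = 0.996   # share of non-AFib recordings correctly cleared
prevalence = 0.02     # rough share of the population with AFib

true_positives = sensitivity * prevalence
false_positives = (1 - specificity) * (1 - prevalence)

ppv = true_positives / (true_positives + false_positives)
print(f"Share of positive readings that are real AFib: {ppv:.1%}")  # ~83%
```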

Sources:

https://www.economist.com/science-and-technology/2019/04/06/can-wearing-your-heart-monitor-on-your-sleeve-save-your-life

https://www.apple.com/newsroom/2018/12/ecg-app-and-irregular-heart-rhythm-notification-available-today-on-apple-watch/

Netflix of Video Games: The Start of Cloud Gaming

By Gokul Siddharthan J, DCMME Graduate Student Assistant


The ability to stream songs and films over the internet has transformed how we consume entertainment over the past decade, but the $140bn video-game market has not yet made the same shift to cloud subscription services of the kind Netflix, Hulu, and others brought to video. Recently, Google began testing a cloud-gaming service called "Project Stream", using a big-budget game, "Assassin's Creed Odyssey". The game is computationally heavy and usually runs on consoles and high-end computers, but with the computational heavy lifting transferred to Google's data centres, even a modest laptop can run it.

Microsoft is due to start testing a similar service called “Project xCloud”. Electronic Arts, a big gaming company with famous titles such as FIFA, has plans for a streaming product of its own. Nvidia, a maker of graphics cards and chips, is testing a similar service. Sony already has a cloud-gaming service called “PlayStation Now”. There are also a few startups in the fray.

The mechanics of cloud gaming involve running the game in a data centre, possibly hundreds of miles away, and relaying the video feed to the user. The success of cloud-gaming services relies on the infrastructure: the computer running the game must react almost instantly to the user's input, or the game will feel sluggish. If the latency, the time taken for data to make a round trip between the data centre and the player's computer, exceeds a couple of dozen milliseconds, the experience starts to break down, especially in fast-paced action games. Connections must be rock solid.
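As a rough illustration of that latency budget, the sketch below (in Python, with a placeholder host name rather than any real streaming endpoint) times a single TCP connection round trip and compares it with a ~25 ms threshold; a real service would measure latency continuously and far more carefully.

```python
# Minimal latency check: time one TCP connection round trip and compare
# it with the "couple of dozen milliseconds" budget mentioned above.
import socket
import time

HOST, PORT = "streaming-edge.example.com", 443  # placeholder endpoint
LATENCY_BUDGET_MS = 25

start = time.perf_counter()
with socket.create_connection((HOST, PORT), timeout=2):
    rtt_ms = (time.perf_counter() - start) * 1000

print(f"TCP connect round trip: {rtt_ms:.1f} ms")
print("Within budget for cloud gaming" if rtt_ms <= LATENCY_BUDGET_MS
      else "Too slow for fast-paced games")
```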

Earlier attempts at cloud gaming failed because network infrastructure was insufficient. Nowadays, however, many homes have high-speed broadband, and firms such as Google and Amazon have data centres all over the world along with the technical expertise to run such a service. Incumbents such as Microsoft and Sony face a threat from these new entrants, but it is still too early to predict who will win the battle.

Cloud gaming appeals for other reasons too. The gaming industry increasingly makes money from users buying digital goods inside a game. The marginal cost of producing such digital goods is almost zero, so every sale is nearly pure profit. Margins on consoles, by contrast, are often thin, so the business model of the gaming industry is likely to change.

People have come to expect entertainment to be portable, transferable between devices, and instantly available. The hope is that cloud gaming will appeal to consumers, and that the industry will simply have to keep up with their habits.

 

Machine Learning

By Gokul Siddharthan J, DCMME Graduate Student Assistant


Machine learning and artificial intelligence are part of the same family. Machine learning is a branch of AI in which computer systems learn from data, identify patterns, and make decisions with minimal human intervention. When exposed to new data, these systems can learn, adapt, and improve on their own.

Machine learning and AI are everywhere; there is a good chance you are using them without even knowing it. Familiar examples include Google's self-driving cars, fraud detection, and online recommendations from Amazon, Facebook, Google Ads, and Netflix. Traditional data analysis relied on trial and error, an approach that becomes infeasible when data sets grow large and heterogeneous. Machine learning offers a smarter alternative: fast, efficient algorithms that analyse huge volumes of data, including real-time data, and produce accurate results. Other major uses include virtual personal assistants such as Alexa, Google Home, and Siri; online customer support, where chatbots present information from a website; commuting predictions, such as traffic estimates in Google Maps and ride matching in networks like Uber; and social media features, such as Facebook's "People You May Know" and face recognition in uploaded photos. These are just a few of the areas where machine learning has been proving its potential.

So how do machines learn? There are two popular methods: supervised learning and unsupervised learning. About 70 per cent of machine learning is supervised, while unsupervised accounts for around 10-20 per cent; less common methods are semi-supervised and reinforcement learning. In supervised learning, inputs and outputs are clearly identified, and algorithms are trained using labelled examples: the algorithm receives inputs along with the correct outputs and learns by comparing its own predictions with those answers to find errors. Supervised learning is used in applications where historical data predict likely future events, such as flagging fraudulent credit card transactions. Unsupervised learning, by contrast, uses data sets without labelled outcomes; the algorithm explores the data on its own to find structure within it. It works well on transactional data, for example identifying customer segments that share certain attributes. Other areas where unsupervised learning is used are online recommendations, identifying data outliers, and self-organizing maps.
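As a small illustration of the difference, here is a minimal sketch in Python using scikit-learn (a library chosen for illustration, not named in the article): the supervised model is trained on labelled examples, while the unsupervised one is given the same inputs with no labels and left to find structure on its own.

```python
# Supervised vs unsupervised learning on the classic Iris data set.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Supervised: inputs AND correct outputs are provided, and the model
# learns by comparing its predictions with the known labels.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
classifier = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Supervised test accuracy:", classifier.score(X_test, y_test))

# Unsupervised: only the inputs are provided; the algorithm looks for
# structure on its own (here, grouping the data into three clusters).
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("Cluster sizes found without labels:", np.bincount(clusters))
```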

Google’s chief economist Hal Varian adds, “just as mass production changed the way products were assembled, and continuous improvement changed how manufacturing was done, so continuous experimentation will improve the way we optimize business processes in our organizations.” It’s clear that machine learning is here to stay.

Sources:

https://www.simplilearn.com/what-is-machine-learning-and-why-it-matters-article

What is Quantum Computing?

By Gokul Siddharthan J, DCMME Graduate Student Assistant

Quantum physics was born in the early 20th century, with renowned scientists such as Albert Einstein and Werner Heisenberg making significant contributions to the field. But quantum computing as a discipline emerged only in the 1970s and 1980s. In the 1990s, algorithms were discovered that a quantum computer could, in principle, run far faster than any classical machine, leading to increased interest in the field. Further discoveries eventually led to a better understanding of how to build real systems that could implement quantum algorithms and correct for errors.


We see the benefits of classical computing in our everyday lives; most of the applications and devices that are ubiquitous today run on classical computing principles. However, there are problems that today's systems will never be able to solve: for challenges above a certain scale and complexity, there simply isn't enough computational power on Earth. To stand a chance of solving these problems, we need a new kind of computing system whose power grows exponentially as the system itself grows.

Quantum computing differs from classical computing at a fundamental level. In classical computing, information is processed and stored in bits, 0s and 1s, and millions of bits work together to create the results you see every day. Quantum computing instead exploits different physical phenomena to manipulate information: superposition, entanglement, and interference. To do this it relies on a different physical device, the quantum bit, or qubit. Just as a bit is the basic unit of information in a classical computer, a qubit is the basic unit of information in a quantum computer.

So how do qubits store information? A number of elementary particles, such as electrons and photons, can be used, with their charge or polarization acting as a representation of 0 and 1; each such particle serves as a qubit. The nature and behaviour of these particles form the basis of quantum computing, and the two most relevant aspects of quantum physics here are superposition and entanglement. Superposition is the quantum state in which a particle exists in multiple states at the same time, which is what allows a quantum computer to examine many different possibilities at once. Entanglement links qubits so that the state of one cannot be described independently of the others.
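A tiny numerical sketch (in Python with NumPy, an illustration rather than anything from the article) makes the idea concrete: a qubit's state is a pair of complex amplitudes, and superposition means both amplitudes are non-zero at once, with their squared magnitudes giving the measurement probabilities.

```python
# A single qubit represented as a pair of complex amplitudes.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)   # the |0> state
ket1 = np.array([0, 1], dtype=complex)   # the |1> state

# Equal superposition of |0> and |1>.
psi = (ket0 + ket1) / np.sqrt(2)

# Born rule: the probability of each outcome is |amplitude|^2.
probabilities = np.abs(psi) ** 2
print("P(0) =", probabilities[0], " P(1) =", probabilities[1])  # 0.5 each

# Simulating measurements: each one yields a definite 0 or 1 at random.
rng = np.random.default_rng(seed=0)
print("Ten measurements:", rng.choice([0, 1], size=10, p=probabilities))
```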


The power of quantum computing is hard to overstate. A quantum computer comprising 500 qubits could, in principle, work with 2^500 states in a single step, and 2^500 is far more than the number of atoms in the known universe. This is true parallel processing: classical computers with so-called parallel processors still only truly do one thing at a time, there are just two or more of them doing it. Classical computers remain better than quantum computers at everyday tasks such as email, spreadsheets, and desktop publishing. The intent is for quantum computers to be a different tool for different problems, not a replacement for classical computers.
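For a sense of scale, the comparison can be checked directly in Python (the 10^80 figure below is a commonly cited rough estimate of the number of atoms in the observable universe, not a number from the article):

```python
# Compare 2**500 possible states with ~10**80 atoms in the universe.
n_states = 2 ** 500
atoms_estimate = 10 ** 80

print("2**500 has", len(str(n_states)), "digits")        # 151 digits
print("Ratio to the atom estimate:", n_states // atoms_estimate)
```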

Quantum computers are well suited to optimization problems, from figuring out the best way to schedule flights at an airport to determining the best delivery routes for a FedEx truck. Google has announced that it has a quantum computer 100 million times faster than any classical computer in its lab. Every day we produce 2.5 exabytes of data, equivalent to the content of 5 million laptops, and quantum computers could make it possible to process the amounts of data we are generating in the age of big data. Rather than using more electricity, quantum computers are expected to reduce power consumption by a factor of 100 to 1,000, because they exploit quantum tunnelling. IBM's computer Deep Blue defeated chess champion Garry Kasparov in 1997; it gained its advantage by examining 200 million possible moves each second, whereas a quantum machine would be able to evaluate 1 trillion moves per second. Google has stated publicly that it aims to build a viable quantum computer within the next five years by launching a 50-qubit machine. Top supercomputers can still match everything a 5-20 qubit quantum computer can do, but they will be surpassed by a machine with 50 qubits.

Though a viable and true quantum computer is still not a reality, the race is on with many companies offering quantum machines. Quantum computing is no longer in the distant future.

Sources:

https://whatis.techtarget.com/definition/qubit

https://www.research.ibm.com/ibm-q/learn/what-is-quantum-computing/

https://www.forbes.com/sites/bernardmarr/2017/10/10/15-things-everyone-should-know-about-quantum-computing/#497b1e011f73

Eye-Tracking Technology

By Gokul Siddharthan J, DCMME Graduate Student Assistant


Eye tracking dates back to 1879, when Louis Émile Javal noticed that readers do not read text in one fluent sweep; instead, they make short movements interspersed with pauses. The earliest tracking devices required direct physical contact with the eye. Since then, numerous innovators have developed improved versions of the technology, and for most of the twentieth century scientists concentrated on making eye tracking precise and non-invasive. Nowadays, eye movements can be captured by wearable devices or even an ordinary web camera.

Eye tracking has evolved in three phases: the discovery of basic eye movements (1879-1920), research on the factors affecting reading patterns (1930-1958), and improvements in eye-recording systems that increased accuracy and ease of measurement (1970-1998). The main bottlenecks to wider use were the costs of R&D and materials, along with bulky equipment and limited data storage and processing capabilities.

How does an eye tracker work? It consists of cameras, projectors, and algorithms. The projectors cast a pattern of near-infrared light onto the eyes, the cameras take high-resolution images of the user's eyes and that pattern, and machine learning, image processing, and mathematical algorithms are used to determine the eye position and gaze point. Eye tracking has been used in Samsung's iris scanner and Apple's Face ID, as well as in visual attention sequencing and the creation of heatmaps. The technology has applications in virtual reality, gaming, medicine, and advertising; the remaining challenges are delivering a more immersive user experience, better product development, and mass adoption in consumer-level devices.
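For a flavour of the image-processing step only (not a full gaze estimator), here is a toy sketch in Python with OpenCV that looks for a dark, circular pupil in a single eye image; the file name is a placeholder, and a real tracker would also use the corneal reflections from the near-infrared projectors to compute the on-screen gaze point.

```python
# Toy pupil detection on one eye image with OpenCV's Hough circle transform.
import cv2
import numpy as np

frame = cv2.imread("eye_frame.png")       # placeholder image path
assert frame is not None, "provide an eye image to run this sketch"

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (7, 7), 0)  # suppress sensor noise

# The pupil shows up as (roughly) the darkest circular region.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                           param1=100, param2=30, minRadius=5, maxRadius=60)

if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)
    print(f"Estimated pupil centre: ({x}, {y}), radius {r} px")
else:
    print("No pupil-like circle found")
```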

However, the ethical aspects of eye tracking have to be considered, because the potential for privacy intrusion is serious. Moral decisions are affected by what our eyes focus on, so tracking eye movements can help in understanding a user's decision-making process. People's responses can also be influenced through their eye movements, so the potential for manipulation is high. Eye movements reveal insights into how different people think, analyse, and process information, and it may not be long before people try to correlate eye-tracking results with criminality.

In medicine, schizophrenia, Alzheimer's disease, PTSD, and eating disorders all have symptoms that are reflected in eye movements. In fact, one of the basic checks a doctor performs is to see how the pupil reacts when a flashlight is shone on it, which indicates whether there is a serious problem. Changes in pupil size, scan paths, and fixation points can even assist in determining which gender an individual is attracted to. As the technology continues to advance, it could threaten privacy far beyond the limited confines of smartphone and computer screens. Eye tracking has the potential to reveal a great deal about device users, and human beings give away more than they realize through their eye movements.

Sources:

http://www.iadt.edu/student-life/iadt-buzz/august-2013/it-does-eye-tracking-undermine-privacy

https://www.aclu.org/blog/national-security/privacy-and-surveillance/privacy-invading-potential-eye-tracking-technology

Cloud-Based vs. Localized Supply Chain Management

By Gokul Siddharthan J, DCMME Graduate Student Assistant


Cloud-based supply chain management provides numerous advantages over a localized model. The cloud makes the system more efficient, more affordable, highly scalable, safer, and easier to integrate with existing systems. Cloud-based solutions also require a low initial investment, are quick to deploy, receive continuous upgrades, and need little maintenance. The shortfalls of a rigid, localized supply chain system make it ill-suited to the dynamic challenges of today's business needs.

A localized system limits how much a company can innovate. The investment required to constantly upgrade and maintain it fluctuates and can strain resources that could otherwise be focused on product innovation. A rigid supply chain may not only hinder growth but also put the company's survival at risk.

Cloud-based systems are more affordable and come with capabilities that are expensive to build into localized systems. The scale of cloud providers drives prices down and expands the capabilities on offer as their customer base grows. Localized systems, by contrast, require dedicated administrators, complex infrastructure, and expensive equipment that are often not feasible for an individual company. Affordability, combined with a constantly upgraded supply chain management system, offers benefits that localized models cannot match.

Cloud systems are also more efficient thanks to automation and data analysis, which can be used to identify and eliminate waste in the flow of information and goods, making the system more transparent and easing the strain on the budget. The fear of downtime or data loss leading to lost profits is greatly reduced: a cloud-based model has more redundancy and better fail-safes, so it suffers less damage than a localized model.

Another big benefit of the cloud model is that the transformation can happen gradually. You can select which parts of your supply chain should move to the cloud to deliver the best value, and the migration can be prioritized and implemented at a pace the organization is comfortable with. Management, maintenance, and upgrades can be outsourced to the service provider, and because the provider operates at scale, costs come down substantially. Other benefits include an easy-to-use interface, an intuitive user experience, and better analytics, all accessible online from anywhere, at any time, on any device.

A cloud-based model eliminates many routine everyday tasks and is a step forward in continuous improvement, allowing the company to focus on the tasks that truly matter. Most businesses can benefit from moving their supply chain management to the cloud, improving their ability to stand out from the competition.

Sources:

https://www.scmr.com/article/why_supply_chain_leaders_are_moving_to_the_cloud

https://cerasis.com/cloud-based-supply-chain-management/