Where are the driverless cars?

By Gokul Siddharthan J, DCMME Graduate Student Assistant


Various companies have promised that driverless cars will be ploughing through our streets by the end of this decade, yet the progress made so far suggests otherwise. Waymo, the current industry leader in driverless cars, promised a driverless-taxi service in Phoenix, Arizona, by the end of 2018. The plan has been a damp squib: the service covers only a few parts of the city and serves only a small group of customers, all of whom have signed non-disclosure agreements.

Driverless cars have been the flagship application for the power of AI, yet progress in the technology has not come close to mimicking human intelligence. Machine learning is an important component of driverless technologies, but millions of miles of practice have still not taught computers to overcome what are called edge cases: unique situations that arise on the roads, such as a light aircraft landing on a busy street. Most situations are less dramatic, but such cases cannot be avoided, and a driverless car needs to think on its feet to maneuver around them. Humans need far less practice. A couple of lessons and a few hundred miles of driving are enough for most people to handle roads anywhere in the world, be it a busy street in New Delhi or a gravel track in rural Greece. Driverless technology still has many milestones to reach. In September, Morgan Stanley cut its valuation of Waymo by 40%, to $105bn, citing delays in its technology. Some firms in China have even suggested that redesigning entire cities around driverless cars would be easier than perfecting the technology.

Like most of what is currently called AI, driverless technology is both powerful and limited. Progress has been transformative, yet the eventual goal of human-like intelligence remains distant. People need to temper their expectations and not fall for the usual hyperbole. There is little doubt that driverless cars are coming, but the consensus is that their arrival is not imminent.

Source:

https://www.economist.com/leaders/2019/10/10/driverless-cars-are-stuck-in-a-jam

Sustainable Forestry

By Gokul Siddharthan J, DCMME Graduate Student Assistant


When it comes to climate change, one of the primary industries singled out, apart from fossil-fuel and energy companies, is the timber industry, which has historically contributed to climate change through deforestation. Trees capture carbon from the atmosphere, and the historical rate of deforestation is an alarming cause for concern; both deforestation and pollution accelerate global warming. However, with the advent of modern technology and growing social pressure against deforestation, timber companies are arriving at innovative solutions to reduce their carbon footprint and make their operations environmentally sustainable.

The Finnish timber company Metsä Group has positioned itself as a “forerunner in sustainable bioeconomy”. After a recent $1.3bn makeover, its plant near Äänekoski, a town in the centre of the country, is now a bioproducts mill, one of the largest in the world, consuming 6.5m cubic meters of wood a year. Felling on that scale might seem to threaten forests that are home to many species of wildlife and that lock away carbon from the atmosphere, yet the company claims its mills do the opposite. For every tree harvested, four saplings are planted. The trees are thinned to ensure the best specimens survive, improving the overall tree population. Despite increasing demand for wood, the annual growth of trees in Finland exceeds the volume of felling and natural loss by over 20 million cubic meters.

The mill aims to make use of every part of the wood and has launched many sustainable initiatives, including selling its byproducts to the textile industry. Its operations use some of the latest technology, such as drones that digitally map large swathes of land. Harvesting is done by giant eight-wheeled machines, which fell, trim and cut the trunks to the required size; the information is relayed electronically to the mill to schedule deliveries. The company is also researching innovative solutions that can help the timber industry reposition itself as a sustainable player. Such environmental responsibility helps restore public trust in corporations, and these innovations can shape a more habitable future.

Source:

https://www.economist.com/science-and-technology/2019/10/17/how-to-make-use-of-all-of-a-tree


New privacy tools from Google

Google has recently announced new ways to delete your personal data automatically, putting it ahead of counterparts such as Facebook and Twitter in letting users delete their data. The new privacy tools let you decide how long to keep your history: until you delete it manually, for 18 months, or for 3 months. Google is looking to expand these tools to all its services. Right now they have launched for YouTube, and the ability to delete location data in Google Maps is coming soon.
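
Google has not said how these settings are implemented internally, so the snippet below is only an illustrative sketch of the idea behind a retention window (the purge helper and the record format are invented for the example): records older than the chosen window are dropped, while the manual option keeps everything until you act.

```python
# Hypothetical sketch of a retention window, not Google's actual mechanism.
from datetime import datetime, timedelta, timezone

RETENTION = {
    "3 months": timedelta(days=90),
    "18 months": timedelta(days=548),
    "until deleted manually": None,   # nothing is removed automatically
}

def purge(history, setting, now=None):
    """Return only the history entries younger than the chosen retention window."""
    window = RETENTION[setting]
    if window is None:                # manual mode: keep everything
        return history
    now = now or datetime.now(timezone.utc)
    return [(ts, item) for ts, item in history if now - ts <= window]

history = [
    (datetime.now(timezone.utc) - timedelta(days=400), "old search"),
    (datetime.now(timezone.utc) - timedelta(days=10), "recent search"),
]
print(purge(history, "3 months"))     # only the recent entry survives
```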

There are upsides and downsides to deleting your data. You won’t receive personalized recommendations based on what you view, which are important to the user experience, and videos you’ve already seen or articles you’ve already read could start showing up in your feed. On the other hand, you won’t get ads targeted to your browsing or search history, though the ads you do see will be less relevant to your needs. To a privacy-conscious person, these features are very appealing, since your data is deleted automatically rather than retained indefinitely. Google announced these tools after a lot of backlash over privacy in the past few years; it is a step in the right direction that could incentivize other social media firms to follow. For further reading, see the link below.

Google Data Delete

https://www.nytimes.com/2019/10/02/technology/personaltech/google-data-self-destruct-privacy.html

Augmented Reality in Healthcare

By Gokul Siddharthan J, DCMME Graduate Student Assistant


Augmented reality has the potential to play a big role in improving the healthcare industry. It has only been a few years since augmented reality was first used in medicine, and it has already earned an important place in medical routines.

These technologies blend computer-generated images and data with real-world views, making it possible for doctors to visualize bones, muscles, and organs without having to cut open a body. Experts say AR will transform medical care by improving precision during operations, reducing medical errors, and giving doctors and patients a better understanding of complex medical problems.
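
As a rough illustration of that blending step, the sketch below alpha-blends a computer-generated overlay onto a camera frame using OpenCV. Both images are synthetic placeholders (a plain gray frame and a drawn outline); a real system would take the frame from a headset camera and the overlay from imaging data such as a CT scan, but the compositing idea is the same.

```python
# Illustrative sketch of AR compositing: blend a rendered overlay with a camera view.
import numpy as np
import cv2

frame = np.full((480, 640, 3), 80, dtype=np.uint8)      # stand-in for a live camera frame
overlay = np.zeros_like(frame)
cv2.ellipse(overlay, (320, 240), (120, 60), 0, 0, 360,  # stand-in for a rendered organ outline
            color=(0, 0, 255), thickness=-1)

# Alpha-blend the rendered model onto the camera view so both stay visible.
blended = cv2.addWeighted(frame, 1.0, overlay, 0.4, 0)

cv2.imwrite("ar_view.png", blended)  # save the composited view instead of opening a window
```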

There are numerous uses of AR in the medical field, including describing symptoms, nursing care, surgery, diabetes management, navigation and pharmacy. When it is hard for a patient to describe a symptom to the doctor, AR can help: there are apps that simulate how vision is harmed by different diseases, helping patients better understand their condition and describe it correctly. About 40% of first intravenous injections fail, and the ratio is even higher for children and elderly patients; one app uses augmented reality to project onto the skin and show a patient’s veins. Spinal surgery is a long and difficult process, but AR can reduce the time, cut the risks and improve the results. A startup has created an AR headset for spine surgeons that overlays a 3D model of the CT scan on the spine, giving the surgeon a kind of “X-ray” vision.

There are several benefits for patients and doctors. AR reduces the risks associated with minimally invasive surgery: screens displaying vital statistics and images delivered by an endoscopic camera could be replaced by AR devices. Patients can use AR for educational purposes, to better understand their own bodies and prepare for procedures; there are apps that help a non-medical person understand the body better. Risk-free medical training is another great possibility, making training more interactive by combining theory with real-world applications on a screen in front of the eyes.

AR has already shown its value in medicine. It is only a matter of time before better applications and devices emerge that can be used effectively on a daily basis. As healthcare costs continue to grow, AR will play a vital role in preventing, controlling and curing disease for millions of people.

Source:

https://thinkmobiles.com/blog/augmented-reality-medicine/

Apple’s advances into heart monitoring

By Gokul Siddharthan J, DCMME Graduate Student Assistant


In recent years, several gadgets have appeared that monitor heartbeats. They come in various forms and designs, offering functions that weren’t around, or even thought of, a few years ago. One of the major transformations has been in the wearables segment, notably watches. Nowadays smartwatches perform functions that phones handled a decade ago: they provide not only the time of day but also the weather, heart monitoring, step counts, messages, emails, GPS, news, phone calls, music, and other apps supported by the device.

One device that has stood out from peers such as Samsung, Fossil, Garmin, and Fitbit is the Apple Watch, which has been rated the best smartwatch for several years. It launched at a time when smartwatches weren’t very popular; Apple not only introduced the watch but also converted many users of traditional watches and altered how the category was perceived. Now Apple has started to make inroads into health-care applications. Recent versions of the Apple Watch have been able to monitor heartbeats and notify wearers of irregular heart rhythms, and the latest version, the Watch Series 4, can take an ECG of the wearer. Users simply open the app, hit start, and hold a finger on the digital crown, and it will take a reading. The feature has been approved by the American Heart Association, and by the FDA too.

A new feature of this watch is its ability to inspect the ECG for signs of a common heart arrhythmia called “atrial fibrillation”, or AFib, which occurs when the heart’s upper chambers do not beat in a coordinated fashion. Blood pools in parts of the chambers, forming clots, and such patients are three to five times more likely to have a stroke. AFib occurs in around 2% of the population, and the risk of suffering from it increases greatly with age.

The recent “Apple Heart Study”, covering 420,000 participants, looked at the predictive value of the device’s monitoring for irregular pulses and found the watch agreed with a gold-standard method only 84% of the time. A separate study, conducted by a research organization contracted by Apple, found the app’s algorithm correctly identified 98.3% of true positives and 99.6% of true negatives, figures far better than the rest of the competition. Approval by the FDA and by regulators in several other countries shows how far technology is penetrating health care. Health care must get ready for the inevitable arrival of technology in medicine: a torrent of data is coming, and it would be wise to build the infrastructure to handle it. Dr Jonathan Mant, a professor of primary care research at the University of Cambridge, concludes that it is “paradigm-shifting. I just don’t know if it is going to be in a good way or a bad way.”
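
As a back-of-envelope check (my own arithmetic, not a figure from either study), reading the 98.3% and 99.6% numbers as sensitivity and specificity and combining them with the roughly 2% prevalence mentioned above gives the chance that a flagged reading reflects a real case:

```python
# Assumed interpretation: 98.3% = sensitivity, 99.6% = specificity, 2% = AFib prevalence.
sensitivity = 0.983   # true cases correctly flagged
specificity = 0.996   # healthy cases correctly cleared
prevalence = 0.02     # share of the population with AFib

true_pos = sensitivity * prevalence
false_pos = (1 - specificity) * (1 - prevalence)
ppv = true_pos / (true_pos + false_pos)   # chance a flagged reading is a real case
print(f"positive predictive value: {ppv:.1%}")   # roughly 83%
```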

Sources:

https://www.economist.com/science-and-technology/2019/04/06/can-wearing-your-heart-monitor-on-your-sleeve-save-your-life

https://www.apple.com/newsroom/2018/12/ecg-app-and-irregular-heart-rhythm-notification-available-today-on-apple-watch/

The Netflix of Video Games: the Start of Cloud Gaming

By Gokul Siddharthan J, DCMME Graduate Student Assistant


The ability to stream songs and movies over the internet has transformed how we consume entertainment over the past decade, but the $140bn market for video games does not yet have a cloud subscription service equivalent to Netflix or Hulu. Recently, Google began tests of a cloud-gaming service called “Project Stream”, using a big-budget game, “Assassin’s Creed Odyssey”. The game is computationally heavy and usually runs on consoles and high-end computers, but with the heavy lifting transferred to Google’s data centres, even a modest laptop can run it.

Microsoft is due to start testing a similar service called “Project xCloud”. Electronic Arts, a big gaming company with famous titles such as FIFA, has plans for a streaming product of its own. Nvidia, a maker of graphics cards and chips, is testing a similar service. Sony already has a cloud-gaming service called “PlayStation Now”. There are also a few startups in the fray.

The mechanics of cloud gaming involve running the game in a data centre hundreds of miles away and relaying the video feed to the user. The success of cloud-gaming services relies on the infrastructure: the computer running the game must react instantly to the user’s input or the game will feel sluggish. If the latency, the time taken for a round trip of data between the data centre and the player’s computer, exceeds a couple of dozen milliseconds, the user experience starts breaking down, especially in high-end action games. Connections must be rock solid.
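
As a rough illustration of the round-trip figure that matters here, the sketch below times a small message bounced off a local echo server. The server, port, and message are stand-ins invented for the example; a real cloud-gaming session would be sending controller input and receiving video frames over a genuine network path, where distance to the data centre adds to every number printed.

```python
# Illustrative round-trip latency measurement against a local echo server.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 54321  # hypothetical local endpoint, not a real service

def echo_server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            while data := conn.recv(1024):
                conn.sendall(data)            # bounce every byte straight back

threading.Thread(target=echo_server, daemon=True).start()
time.sleep(0.2)                               # give the server a moment to start listening

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # send small messages immediately
    for i in range(5):
        start = time.perf_counter()
        cli.sendall(b"input-event")           # stands in for a controller input
        cli.recv(1024)                        # stands in for the returned frame data
        rtt_ms = (time.perf_counter() - start) * 1000
        print(f"round trip {i}: {rtt_ms:.2f} ms")
```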

Earlier attempts at cloud gaming failed because network infrastructure was insufficient. Nowadays, many homes have high-speed broadband connections, and firms such as Google and Amazon have data centres all over the world along with the technical expertise to run such a service. Incumbents such as Microsoft and Sony face a threat from these new entrants, but it is still too early to predict who will win the battle.

Cloud gaming appeals for other reasons too. The gaming industry increasingly makes money from users buying digital goods within games, and since the marginal cost of generating such goods is almost zero, nearly every sale is profit. Margins on consoles, by contrast, are often thin, so the industry’s business model is likely to change.

People have come to expect entertainment to be portable, transferable between devices, and instantly available. The hope is that cloud gaming will appeal to consumers, and the industry will simply have to keep up with their habits.


Machine Learning

By Gokul Siddharthan J, DCMME Graduate Student Assistant


Machine learning and artificial intelligence are part of the same family. In fact, machine learning is a branch of AI in which computer systems learn from data, identify patterns, and make decisions without human intervention. When exposed to new data, such systems can learn, grow, change and develop on their own.

Machine learning and AI are everywhere; there is a high chance you are using them without knowing it. Machine learning is applied in Google’s self-driving car, in fraud detection, in online recommendations on Amazon, Facebook, Google Ads and Netflix, and in many more places. Traditional data analysis was done by trial and error, but that approach isn’t feasible when data becomes large and heterogeneous. Machine learning offers a smarter alternative: fast, efficient algorithms that analyze huge volumes of data, including real-time data, and produce accurate results. Other major uses include virtual personal assistants such as Alexa, Google Home and Siri; online customer support, where chatbots interact with customers to present information from a website; predictions while commuting, such as traffic estimates in Google Maps and matching in online transportation networks like Uber; and social media services, such as Facebook’s “People You May Know” and face recognition in uploaded photos. These are only a few examples of the areas where machine learning is proving its potential.

So how do machines learn? There are two popular methods, supervised learning and unsupervised learning. About 70 per cent of machine learning is supervised, while unsupervised accounts for around 10-20 per cent; less common methods include semi-supervised and reinforcement learning. In supervised learning, inputs and outputs are clearly identified, and algorithms are trained on labelled examples: the algorithm receives inputs along with the correct outputs and learns by comparing its own results against them to find errors. Supervised learning is used in applications where past data predict future events, such as detecting fraudulent credit card transactions. Unsupervised learning, by contrast, uses data sets with no labels; the algorithm explores the data on its own to find structure within it. It works well for transactional data, for example in identifying customer segments with certain attributes. Other areas where unsupervised learning is used include online recommendations, identifying data outliers, and self-organizing maps.
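
A minimal sketch of the two styles, using scikit-learn on synthetic data (the dataset and model choices are illustrative, not drawn from the article): the supervised model is trained on labelled examples and scored on held-out ones, while the clustering step is handed the same inputs with no labels at all.

```python
# Illustrative contrast between supervised and unsupervised learning.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Supervised: inputs come with labelled outputs (e.g. "fraud" / "not fraud"),
# and the model learns to reproduce those labels on unseen data.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised: the same inputs with no labels; the algorithm looks for
# structure on its own (e.g. customer segments).
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster sizes:", [int((km.labels_ == c).sum()) for c in range(2)])
```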

Google’s chief economist Hal Varian adds, “just as mass production changed the way products were assembled, and continuous improvement changed how manufacturing was done, so continuous experimentation will improve the way we optimize business processes in our organizations.” It’s clear that machine learning is here to stay.

Source:

https://www.simplilearn.com/what-is-machine-learning-and-why-it-matters-article