Bionic body parts, time travel, immortality. Sounds like science fiction?
How about self-driving cars, virtual assistants, talking robots? More like science fact?
So maybe we can never defy the laws of physics, but we can certainly use them to our best interest.
Ever since there have been humans, there has been a curiosity to study, create and replicate our nature. The brain being one of the most efficient devices, for lack of a better term, it would be unimaginable if we didn’t try to learn about it.
The existence of computers and software gave birth to programming, and thus code and algorithms are used to help process and mimic the human body. Our ultimate goal of creating robots that could replace us in performing the most mundane of tasks is the hope for an easier life, where daily chores are negligible and our focus can go towards greater pleasures or discoveries.
With the existence of virtual assistants like Siri and Cortana, many fall into the trap of thinking that these assistants are in some way intelligent. While there have been arguments over what constitutes a true AI, in reality most virtual assistants are only as intelligent as what has been programmed into them. They are more akin to chatbots, pre-programmed with a certain set of functions to perform a set range of tasks based on certain keywords.
True AI, it is argued, will allow machines to adapt to the human: anticipating, making predictive analyses, and performing actions according to their comprehension of tasks. Enter Machine Learning (ML) algorithms.
The technology behind AI
Do enough reading and you’d soon realise there are way more Machine Learning algorithms and projects out there than you’d think. Neural networks, Bayesian networks, reinforcement learning, decision trees, random forests, classifiers, cluster analysis…
Learning is not an instantaneous event; it is a sequential one. From an AI perspective, there are technologies that are useful for different types of things. Some are focused on analysing speech interactions; others are used for walking or vision acquisition. These are all built on sequences. However, there isn’t quite one algorithm that is useful for all.
Neuramatix is a Malaysia-based AI R&D company, founded and led by Robert Hercus and his son Adlan. With a great passion for breaking through all conventional algorithms in AI, Adlan described their Machine Learning algorithm as an all-purpose brain.
“What we do is more universal – it’s more about creating a high level algorithm that can be applied across all functions, as opposed to others that are very purpose built.”
Adlan was talking more in the context of robotics. In essence, the algorithm isn’t so much programmed as it is created. As Adlan describes it, programming is the expression of everything that has been done. It’s a way to demonstrate that the computer has learnt something.
It’s not so much programming the learning, however. Robert clarifies, “Machines don’t teach themselves yet, unfortunately. They can train, if you like. They still train by brute force or heuristics or algorithms; they don’t really learn. And to me that isn’t really intelligent. Only when you can programme a system to learn by itself and adapt to a changing environment do you have some form of intelligence.”
Another Malaysian company with ML at its core is Berkshire Media.
Dr. Vala Ali Rohani is the Chief Data Scientist and Head of Data Analytics & Research at Berkshire Media, and they have a very different approach and variation of the algorithm. “In a simple form, ML is getting computers to learn from the past to predict the future. Such systems aim to perform human-like cognitive functions, where the outputs are optimised by learning from historical datasets.”
In Berkshire Media, ML is used in several projects including Sentiment Analysis (called SentiRobo) and Topic Modelling (called TopiRobo).
SentiRobo was developed to predict sentiment scores on social media using an enhanced Naïve Bayes algorithm, which works well with large volumes of social media content in two or more languages. “In other words, we developed a machine learning algorithm for text mining which is language-independent. SentiRobo has been used to intelligently measure the sentiment scores for millions of social media posts.”
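To make the idea concrete, here is a minimal sketch of a multinomial Naïve Bayes sentiment classifier in pure Python. It is not SentiRobo’s actual implementation – the class name, training data and labels are all hypothetical – but it illustrates why the approach can be language-independent: the model only counts tokens, with no language-specific processing.

```python
from collections import Counter, defaultdict
import math

class NaiveBayesSentiment:
    """Toy multinomial Naive Bayes sentiment classifier.

    Language-independent in the sense that it treats text as a bag of
    whitespace-separated tokens, with no language-specific processing.
    """

    def __init__(self, smoothing=1.0):
        self.smoothing = smoothing               # Laplace smoothing constant
        self.word_counts = defaultdict(Counter)  # label -> token counts
        self.label_counts = Counter()            # label -> number of documents
        self.vocab = set()

    def train(self, documents):
        for text, label in documents:
            self.label_counts[label] += 1
            for token in text.lower().split():
                self.word_counts[label][token] += 1
                self.vocab.add(token)

    def predict(self, text):
        total_docs = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            # log prior + sum of smoothed log likelihoods
            score = math.log(self.label_counts[label] / total_docs)
            denom = sum(self.word_counts[label].values()) + self.smoothing * len(self.vocab)
            for token in text.lower().split():
                num = self.word_counts[label][token] + self.smoothing
                score += math.log(num / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label
```

A production system would add tokenisation, feature selection and the “enhancements” the quote alludes to, but the counting core is the same.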
TopiRobo is designed to automatically discover topics in social media content. “For this approach, we used an unsupervised topic modelling method that incorporates the Latent Dirichlet Allocation (LDA) algorithm to discover the topics in collected datasets. Empirical experiments on social media datasets with over 350,000 records revealed that this approach is quite effective for detecting topic facets and extracting their dynamics over time.”
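The LDA method mentioned is commonly implemented as a collapsed Gibbs sampler. This toy version is written from scratch purely for illustration – Berkshire Media’s production system is presumably far more sophisticated, and in practice a library implementation would be used – but it shows the core loop: each word’s topic is resampled in proportion to how popular the topic is in its document and how strongly the topic is associated with that word.

```python
import random
from collections import defaultdict

def lda_gibbs(docs, num_topics, alpha=0.1, beta=0.01, iterations=100, seed=0):
    """Toy collapsed Gibbs sampler for Latent Dirichlet Allocation.

    docs: list of documents, each a list of tokens.
    Returns per-topic token counts, from which top keywords can be read.
    """
    rng = random.Random(seed)
    vocab = {w for doc in docs for w in doc}
    V = len(vocab)

    # Random initial topic assignment for every token position
    assignments = [[rng.randrange(num_topics) for _ in doc] for doc in docs]
    doc_topic = [[0] * num_topics for _ in docs]                # n_{d,k}
    topic_word = [defaultdict(int) for _ in range(num_topics)]  # n_{k,w}
    topic_total = [0] * num_topics                              # n_k
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = assignments[d][i]
            doc_topic[d][k] += 1
            topic_word[k][w] += 1
            topic_total[k] += 1

    for _ in range(iterations):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = assignments[d][i]
                # Remove the word's current assignment
                doc_topic[d][k] -= 1
                topic_word[k][w] -= 1
                topic_total[k] -= 1
                # Resample proportional to
                # (n_{d,k} + alpha) * (n_{k,w} + beta) / (n_k + V*beta)
                weights = [
                    (doc_topic[d][t] + alpha)
                    * (topic_word[t][w] + beta)
                    / (topic_total[t] + V * beta)
                    for t in range(num_topics)
                ]
                k = rng.choices(range(num_topics), weights=weights)[0]
                assignments[d][i] = k
                doc_topic[d][k] += 1
                topic_word[k][w] += 1
                topic_total[k] += 1
    return topic_word
```

After enough sweeps, the highest-count words per topic serve as that topic’s keywords, which is essentially what the domain experts later inspect.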
Software Connectors Asia, a Singapore-based market accelerator, represents Trifacta, another data analysis company with ML at its core. Resident advisor Richard Jones is an ex-Cloudera VP with a huge interest in anything data analytics. “Trifacta surveys the outcomes using a machine learning and AI-type prediction framework with smart visualisation. They engage the user of the technology with data and what they might want to do with it.”
As Richard puts it, programmers don’t always understand big data in its raw form, and it takes time even for analysts. “The nature of data is dirty, ugly and messy.”
Bringing logic to the brain
Neuramatix developed their own algorithm called NeuraBASE, designed to mimic the human brain. While Robert describes it more as a proof of concept, he claims that the NeuraBASE algorithm makes machines more adaptive to self-learning when the learning environment changes. “That is a very important capability that we have that other machine learning projects might not. Our ultimate aim is to integrate all these [learnings in different environments] together.”
Robert’s machines learn by a reward system – the machine has to figure out ways to achieve a goal, and when that goal is reached, regardless of efficiency, it is rewarded. By implementing different goals for the same algorithm, and linking them to the necessary hardware to perform certain tasks, the computer learns. While there are multiple ways to achieve the same goal, reinforcement is used to acknowledge that the machine has learnt something. As the system builds up, multiple goals run in parallel.
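This reward-driven loop is, in generic terms, reinforcement learning. Neuramatix’s actual algorithm is proprietary, so as an illustrative stand-in, here is a minimal tabular Q-learning sketch: an agent on a one-dimensional track is rewarded only for reaching the goal, regardless of how efficiently it got there, and gradually figures out the route on its own.

```python
import random

def train_goal_seeker(track_length=5, episodes=300, seed=0):
    """Tabular Q-learning sketch: an agent on a 1D track learns to
    reach the goal at the far end. Reaching the goal yields the only
    reward, however inefficient the path -- the machine has to work
    out an efficient route on its own.
    """
    rng = random.Random(seed)
    actions = (-1, +1)  # step left or step right
    q = {(s, a): 0.0 for s in range(track_length) for a in actions}
    alpha, gamma, epsilon = 0.5, 0.9, 0.2

    for _ in range(episodes):
        state = 0
        while state != track_length - 1:
            # Epsilon-greedy action selection: mostly exploit, sometimes explore
            if rng.random() < epsilon:
                action = rng.choice(actions)
            else:
                action = max(actions, key=lambda a: q[(state, a)])
            nxt = min(max(state + action, 0), track_length - 1)
            reward = 1.0 if nxt == track_length - 1 else 0.0
            # Standard Q-learning update
            best_next = max(q[(nxt, a)] for a in actions)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q
```

After training, the learnt values prefer moving towards the goal at every state – the “acknowledgement” Robert describes, encoded as reinforced value estimates.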
Take air hockey for example. It’s a 2D function, going left and right. Run the same independent algorithm twice and you could play ping pong in a 3D environment. NeuraBASE is basically a collective of many independent functions.
The beauty of this type of learning is that the functions operate independently. “A lot of robotics has more functionality added to become more complex. When you have several systems working in parallel, you get linear growth, whereas a system that adds more and more complex functionality faces exponential growth, making it harder and harder to develop those algorithms.”
Robots vs Human: the rat race
There have been many big names in ML – from IBM’s Deep Blue and Watson to Google’s AlphaGo and self-driving cars – if they have the budget, they are working on it.
When AlphaGo beat Lee Sedol, 18-time world champion of Go, it made headlines and created a buzz around tech and non-tech fields alike. Not everyone was as impressed, though.
“Take it out of the room and ask it to play chess, and it wouldn’t be able to learn how to – you would have to teach it from scratch from thousands of game histories. Whereas with a typical human being, you show them a few games and they would know how to play Go. They wouldn’t be any good, but they’d know how to play.”
Robert’s aim since the beginning has been to create an all-purpose algorithm that could be applied in different situations.
“I’m not that much more impressed with AlphaGo than with Deep Blue, which played chess. It’s a lot of brute force, and there have to be thousands of examples. It is learning in that it can map and predict the patterns. I don’t know if you can call that intelligence or not. Even with the Space Invaders game – the computer was learning based on visual patterns and videos.”
These computers train on thousands of videos before a single neuron gains the ability to identify a cat, or a male face, for example. “Which is fine, but you can take a one-year-old baby, show it to them once or twice, and the next time they see it they will know it’s a cat. They know the difference. They don’t need tens of thousands of examples. A child only needs one or two examples, and they can already recognise animals.”
At the moment, there is no computer in the world that has reached the stage where it can learn from a few examples, recognise and adapt. Robert argues that one of the keys to intelligence is adaptability; most systems trained to do a certain function are not adaptable.
Take self-driving cars, for instance. At the moment, one can drive relatively problem-free down a highway without human input. But take a newspaper blowing in the wind: the car will identify it as a foreign object and hit the brakes to avoid it. A human would have enough experience not to panic at objects blowing in the wind.
So even automotive AI has a long way to go, it would seem. “If you change temperature or terrain, the programme has to learn to adapt to the changes. Intelligence has to be all about learning and adaptation. Too much intelligence nowadays is pre-programmed, brute force, without adaptation.”
In Robert’s terms, they are replicating, if not downright building, a brain – just as brain synapses link from one point to another, creating a chain of thought, and at each of those individual points there are multiple choices.
Humans make decisions, or have a certain preference, based on character, their experience and what they have learnt; they tend to choose the option they associate with good experiences.
Machines, in general, may not have that priority. Train a machine with the same data and it will probably yield the same results every time, no matter how many times it’s repeated.
The US Defense Advanced Research Projects Agency has taken a high interest in developing these technologies. As Dr. Gill Pratt, Program Manager of the DARPA Robotics Challenge, described: “My expectation is that the robots are going to be slow. What we’re looking for right now is for the teams to just do as well as roughly that one-year-old child.”
The DARPA Robotics Challenge, held from 2012 to 2015, aimed to develop semi-autonomous ground robots that could do “complex tasks in dangerous, degraded, human-engineered environments”, demonstrating the utility that robots might have in a real disaster scenario.
Neuramatix, it seems, is not in the business of robotics as much as it’s looking to build a brain.
“The DARPA challenge robots probably had a different programme, pre-programmed separately for each function, rather than a combined intelligence that could do everything. Our programmes – although they are independent and learn different objectives and goals, if you run them concurrently they can still work in those environments.”
“Even walking, people have different gaits; these are characteristics they learn. They can all walk, but they just walk in different ways, with subtle differences. People learn differently and they adapt and integrate to their environment.”
“We use one algorithm for all training, whereas others might use several different algorithms for different tasks. With our algorithm, every training run can produce different outcomes. These runs learn different patterns, but at the end of the day they are all able to perform the tasks set.”
Data, so much data
In our interview with Pure Storage co-founder John Hayes, he told us that we are looking at about 200PB of data just to train a car to drive itself. While cars are hungry for data, Neuramatix’s algorithm chomps through a similar capacity: Adlan tells us they are looking at a total of just under a petabyte of data for their NeuraBASE algorithm.
Development started as early as 2002 for Neuramatix. By 2007, it was reported that they were using some 800GB of memory for genome sequencing. And that’s all just gibberish – only the chemical bases ACTG, repeated 6 billion times. They processed them then, and still do today.
Being a RAM-intensive operation, Adlan told me they have a server farm with some 700GB of RAM. Everything sits in RAM for quick access to the data. They code everything in-house; even their database structure is unique to them.
Richard has had discussions with Neuramatix, and he was impressed not only by their technology and difference in approach, but also by the efficiency of their algorithms. “The way they store their framework means they have their own database engine, with a tenth of the processing power, and it doesn’t need nearly as much storage.”
As Robert said earlier, they do not run complex algorithms, but rather several simple independent algorithms, pulling from a collective data pool.
The time it takes for a machine to learn up to a functioning level differs based on what is required. As Richard puts it, “it’s not so much a question of how long it will take a machine to learn; it’s more a question of what you want answered. It really depends on your environment and what is best suited for your algorithms.”
Back at Dr. Vala’s lab, he echoes the sentiment. There are several parameters to consider: the size of the training dataset, the ML algorithm used, CPU processing power, memory size and so on.
“More training data usually leads to more accurate predictions in ML algorithms. In some cases we use over 800,000 records to train our ML algorithms; that doesn’t mean this wouldn’t work with smaller datasets. It’s suggested to use 70% of the whole dataset for training and the remaining 30% of records for testing the applied ML algorithm.”
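The 70/30 split Dr. Vala describes is a standard holdout evaluation. A minimal sketch of the idea (the function name and seed are illustrative, not Berkshire Media’s code):

```python
import random

def train_test_split(records, train_ratio=0.7, seed=42):
    """Shuffle the dataset, then hold out 70% of records for training
    and the remaining 30% for testing. Shuffling first means both
    partitions reflect the overall distribution of the data.
    """
    rng = random.Random(seed)
    shuffled = records[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = round(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]
```

The model is then fitted on the first partition only, and its accuracy is reported on the second, which it has never seen.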
In Berkshire Media’s case, they value precision. Being strongly tied to analytics, their systems have been subjected to several performance runs.
“SentiRobo succeeded in predicting the sentiment value of mixed English-Malay tweets in two domains, Education and Airport Management, with accuracy rates of 71% and 79% respectively…Referring to a recently published study which compares ten different commercial Sentiment Analysis (SA) tools from all over the world, the average accuracy rate of the studied SA algorithms was around 60%.”
Domain experts were also brought in to evaluate the performance of TopiRobo. “They investigated the detected topics along with the assigned keywords and compared the results with their own interpretation of the top topics in the studied datasets. The experts’ investigation revealed that the developed LDA algorithm successfully detected the main social media topics in the studied domains.”
To each his own: Neuramatix’s approach is more akin to teaching humans, incorporating and factoring in the flaws created by the machine. That process – making it as intelligent as a child – could take years, depending on the underlying hardware and software.
“A lot of AI is from the concept of putting in information, you process it and you output a result. That paradigm is fundamentally wrong for intelligence. Intelligence is about being able to express what you’ve learnt, whereas machines are merely regurgitating what they’ve learnt. If you recall a song you can hear it in your head, but it’s not an output function, it’s a reactivation of the input function. Currently they cannot represent that in neural structures.”
Berkshire Media and Neuramatix may be looking at very different segments of the industry, and indeed focussing on different goals, using ML for different purposes; but in essence, they are both playing with data. Big data.
“I like to think of AI as the first big data problem ever in existence. Our problem was we called it a mountain of data, and it wasn’t as sexy. But in 2004 we were dealing with the amount of data that people only started working on in 2014.”
Adlan isn’t wrong about big data – except it’s probably about more than just the sheer amount of it. The amount of analytics that goes on behind AI, and its potential use cases, make Berkshire Media and Neuramatix more likely associates than rivals.
The nuts and bolts
While the first thing that comes to mind when someone mentions robots is probably, well, actual metal (or other material) machinery with the capability to perform some sort of human function, in actual fact a lot of ML remains within software. Can the hardware catch up? It may not be there yet, but it’s slowly getting there.
“Typically, there’s a very large gap between what we use and what we need. We want high-performance RAM, and disks just for backup. We load everything into RAM and process it in memory. When we are running the neural network, it’s purely in RAM.”
Having a different focus from Neuramatix, Dr. Vala’s work can be sustained with existing hardware, but his answer does leave room for hardware improvement.
“In most cases, the existing commodity hardware and facilities are sufficient to run ML algorithms and build the prediction models. Although for Big Data cases, some related infrastructure and analytics environments and technologies such as Hadoop, Spark, … will be required.”
But it’s not just a case of building it and chucking it on a server somewhere – regular clean-ups and maintenance are needed.
“One of the essential stages in any data analytics project is Data Cleaning. It’s the process of detecting and correcting corrupt or inaccurate records from a record set, table, or database. After that, this tidy data can be used by the ML algorithm for both training and testing.”
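A minimal illustration of such a cleaning pass – the field names and rules here are invented for the example, not Berkshire Media’s actual pipeline:

```python
def clean_records(records, required_fields=("id", "text")):
    """Minimal data-cleaning pass of the kind described above:
    drop records missing required fields, strip stray whitespace
    from strings, and remove duplicates (keeping the first occurrence).
    """
    seen = set()
    tidy = []
    for rec in records:
        # Reject corrupt or incomplete records
        if any(rec.get(f) in (None, "") for f in required_fields):
            continue
        # Normalise string values
        cleaned = {k: v.strip() if isinstance(v, str) else v
                   for k, v in rec.items()}
        # Drop exact duplicates on the required fields
        key = tuple(cleaned.get(f) for f in required_fields)
        if key in seen:
            continue
        seen.add(key)
        tidy.append(cleaned)
    return tidy
```

Only the tidy output goes on to the training and testing stages; the rejected records are typically logged for later inspection.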
Saviour of mankind?
“In the past decade, ML has been used in our daily life for practical speech recognition, self-driving cars, recommendation systems by online merchants, and even predicting election results. There are many results and findings as we analyse a huge amount of Social Media content every week at Berkshire Media.”
In the information age, data is king. Berkshire Media made a good prediction of the recent Sarawak election by utilising SNA (Social Network Analysis) techniques and visualising the SNA graphs, analysing around 90,000 social media records.
More impressively, Dr. Vala told us about adopting social data with other data sources to predict an election outcome in a small constituency in Malaysia with 97% accuracy.
Having been in the big data industry for a long time, Richard also sees an influx of market offerings that use Machine Learning for data analytics purposes.
“There are apps after apps after apps that are collectively AI. Even chatbots – while the conversational aspect is not quite intelligent yet, what happens behind it – drawing on data and information – is. There is now a myriad of wearable devices that not only record your activities but manage your daily life. These algorithms are there to help you manage it.”
Drawing on a few examples, there is x.ai, a personal assistant that helps schedule meetings; Charlie AI, which researches companies and profiles for you, so you can know more about the people you talk to and hold an intelligent conversation; as well as ROSS, a legal research AI. Considering the amount of data lawyers sift through every day, ROSS collects foundational information to create a more intelligent search and response.
While there are many speculated uses for AI, from research to military, these bots aren’t exactly intelligent, intuitive, or predictive. As Richard puts it, “There are advancements in reasoning. I don’t yet think there will be a judge that is a machine.”
When everything goes wrong
“As long as it’s software, it will be susceptible to malware,” as Robert says. A fully hardware system would probably be less susceptible to being compromised, but as we’ve discussed, there are still limitations.
Adlan isn’t quite worried though.
“We’re so quick that retraining and relearning isn’t a problem.”
“When we are doing natural language, we could process a book like The Hobbit in about two minutes. We can remove all punctuation and categorise words and sentences in about two minutes. We’re fast – faster than a lot of the tools out there.”
Even discounting the speed, Robert already has systems in place at Neuramatix.
“In a naturally intelligent system, there is already replication and duplication built in. Any naturally intelligent system should have natural in-built redundancy and variation, so that it is able to express the same thing in multiple ways.”
Dr. Vala is just as confident. Well, if you’re in the field of data in general, you’re prepared anyway; let alone these professionals, who have easily been in ML for more than 10 years.
“In most cases, ML algorithms get trained and tested in offline mode. So, a lost connection would not be a serious concern in this context, because the odds are quite low.”
“Certainly, security and backup mechanisms are two main elements that must be considered in all data analytics projects. We follow well-designed procedures to secure our ML systems, in addition to automatic backup mechanisms which regularly archive the different versions of our datasets and programming modules.”
Catch-22: to kill or not to kill, or is that not the question?
There has been speculation as to how a machine will learn human ethics. And if one were presented with the Trolley Problem, would it be designed to kill its owner?
Well, according to Richard, maybe.
“Robots can only be as ethical as the humans that made them. They will never have emotions, so their decisions and outcomes will always be the same – they won’t be influenced by external factors. But a robot will always see and comprehend things the way it was taught; in that sense, fewer errors will surface.”
Robert is rather relieved that robots won’t have emotions. “If they had emotions they wouldn’t have the desire to go to work,” he laughs.
Robert is more interested in the practicality of it. Maybe the question is not whether a machine like an autonomous car would kill the driver, but rather, legally, whose fault is it?
“It’s interesting because you need to figure out the legal side of things – would it be the owner’s fault? The manufacturer’s fault? Insurance companies might put premiums up tremendously, or the insurance might be built in with the automated car. It will be interesting to watch the legal implications.”
But he doesn’t think it’s something that we have to worry about in the near future.
“Although I think they are still a long way off from where they pretend to be.”
Richard is not fully convinced about having a car drive him though. “It’s not just automotive – it’s autonomous everything. The industry is doing an awful lot to convince people it’s safe to be driven by a machine. But I like my own right foot, an awful lot, and I would not trust a machine driving me around.”
DeepMind and the ethics board
Maybe it’s not as much of how a robot will learn ethics, but how the regulations and policies would be set in place.
Google established an AI ethics board upon acquiring DeepMind in 2014, after the cofounders of the £400 million AI lab insisted that they would only agree to the acquisition if Google fully considered the ethics of the technology it was buying into.
“Robots are meant to talk to each other – otherwise you won’t get the full benefits of using AI anyway. But if you have that function you have to be careful – how much intelligence or function are you going to give the machine?"
"Can they communicate with each other or do you limit what they can communicate? Robots listening to conversation – does it share that information with other robots – it becomes a privacy issue.”
With regulation there will be control measures. The question is how and where to strike a balance between security and convenience.
“You have to think about programming the control system such that it can still drive a car. You have to ask the questions: What functions are you going to give the robot? What control systems will you implement? They won’t do everything.”
Dr. Vala recognises the importance of this, and he believes humans need to be ready. He welcomes the initiative taken by Google DeepMind, and recognises that many of the industry’s big players are looking to convey its importance too.
“This is the age of robots and Artificial Intelligence. We need to be ready to interact with very intelligent robots in our daily lives. Definitely, by that time, ethics and safety will be among the serious concerns that scientists need to address before it is too late. It is definitely important to address this issue before we find ourselves trapped by highly intelligent machines. Such ethical considerations and rules will make the coming high-tech future manageable.”
Still, he’s glad that robots are not that advanced yet. “Fortunately, according to some recent studies published on independent websites, the human brain is still the most complex structure in the universe.”
Robots take over the world! (or maybe not)
“I think that’s complete nonsense, and a lot of negativity towards AI.” Robert’s reaction was immediate.
It’s hard to fault the cynics, since our silver screens are showing more sci-fi movies like I, Robot, Ex Machina and Her; humans were born to dream and speculate. Given the popularity of doomsday scenarios, apocalypses, zombies and robots – put them together and you have a human race that enjoys morbid fiction.
That’s not to say it translates so easily to real life, though.
“What we have right now isn’t really suited to adapt to anything, so we don’t really have anything to fear at the moment – let alone machines with the ability to act according to their thoughts. They don’t really have thoughts and are programmed by brute force; it’s not intelligent.”
“Whether it will turn against you – well, that depends on whether you keep it locked up or not,” he joked. “If it was a very intelligent brain, you probably might keep it in a cloud somewhere, and all it can do is attach to an arm – and that’s a robot with one function. There may be other robots with other functions, like a cook, or a nanny. And being AI, they probably have functions where they can talk to each other, sharing information; you don’t know.”
“But it doesn’t mean they will rebel, because to rebel they will need to have desire. And for other robots, it depends on how they are programmed with desire; they might be programmed with personality and learn. At the end of the day, for someone to want to take over something, it’s usually animal instinct. It’s not a higher cognitive function; it’s a lower cognitive function. The question is whether you are able to programme the lower cognitive functions, for a robot to experience fear, anger and all these emotions.”
The future of AI
We see successes in cars driving with minimal human input. We see failures in robots attempting to write a sci-fi script. While development is slower than the pace of change humans expect, we can see AI slowly taking form and integrating into our daily lives.
In fact, in a recent interview with Brocade VP of Storage Networking Jack Rondoni, he told us Brocade is doing R&D into ML as well – for the purpose of using analytics to provision networking and storage automatically, depending on demand. While there is no date set yet, R&D is already under way.
“Intelligence is a lot about creativity. All creativity, inspiration or vision comes from those choices, and those choices at those points build up intelligence. Machines should learn through observation and make decisions through relative association.”
“It all comes down to their experience, their observation, and the amount of information – and the brain has a limited amount of information. But an artificially created brain has no limit on the amount of information it carries. It can probably find patterns or associations across different fields; then you can apply that to biology, to IT, to bioengineering. You have the ability to discover things and create hypotheses based on these patterns of relationships. Humans don’t have the capacity to make all these links and associate patterns.”
Moore’s law dictates that technology in general moves exponentially fast. The same can be said of the development of AI, and Dr. Vala is excited about that.
“[Technology moves] incredibly fast! Flying to Mars in just 30 minutes may be possible using the Laser Propulsion System!” he exclaimed, pointing me towards the above-mentioned article.
“It’s quite predictable that in the very near future, the global community will witness a tremendous growth of Artificial Intelligence, smart apps, and digital assistants. Machine Learning will also dominate the mobile market, as well as the territories of drones and self-driving vehicles.”
Building on his knowledge of virtual assistants and where data analytics is being used, Richard speculated about an integrated world. “In future, robots could qualify transactions for the sales function, even have VIP frameworks where they can recommend and act for an individual – like ordering a taxi and checking into hotels while you’re still on a flight, remembering your preferences. They could prompt and ask questions – perhaps what you want to do for dinner – and help book through applications and links to the hotel system.”
Are we there yet?
The most commonly known trend, and perhaps the closest to commercial viability, is self-driving cars. Robert thinks they have a long way to go yet, though, despite the hype. Even where certain parts of the function are available, they’re not frequently used. Some cite ease of operation as an issue; others simply cannot afford to own that technology in their daily lives yet.
“At this moment in time, even a Tesla learning how to park takes a long time. It’s impressive what it can do and all, but people aren’t really using it yet, and the technology isn’t intuitive enough that people can instantly figure out how to work it. The true reach of a technology is when everyone has it and is using it.”
At the moment, Google’s self-driving cars have to map out the highway and every obstacle manually before the car can be set to drive down those roads. That way the cars know what should be there, and anything new is perceived as an obstacle.
Adlan and Robert speculate that when AI is fully matured, cars will be able to communicate with each other, and perhaps with the traffic lights too – so it wouldn’t take long for cars to clear a set of lights, as they could all move at the same time.
Recent attempts at releasing AI onto the web were technologically successful, but speak volumes about human nature and our readiness to adapt to these technologies. Tay.ai was shut down within 16 hours of its launch, because people figured out how Tay was learning and quickly latched onto mischievous attempts to destroy Tay’s reputation – if that even matters.
A human brain learns from different responses, and the brain provides choices for a person to react to different situations. It’s the same with chatbots. A chatbot will learn people’s responses, choose the most frequent one and use that as an output, provided it’s something it has come across before. In Tay’s case, while it may have been all good fun for the people interacting with the bot, if machines are to learn in these environments, we might not get our intelligent assistants just yet.
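The most-frequent-response scheme described above can be sketched in a few lines. This is a toy illustration of the learning mechanism, not the code of Tay or any commercial chatbot – and it also makes the failure mode obvious: whoever repeats a response most often determines what the bot says.

```python
from collections import Counter, defaultdict

class FrequencyChatbot:
    """Sketch of the scheme described above: record every response
    observed for a given prompt, and reply with the most frequent one,
    provided the prompt has been seen before.
    """

    def __init__(self):
        self.responses = defaultdict(Counter)  # prompt -> response counts

    def observe(self, prompt, response):
        # Learn from an observed prompt/response pair
        self.responses[prompt.lower()][response] += 1

    def reply(self, prompt):
        seen = self.responses.get(prompt.lower())
        if not seen:
            return None  # never encountered this prompt before
        return seen.most_common(1)[0][0]
```

Feed it a coordinated stream of mischievous responses, as happened with Tay, and those become the most frequent and therefore the bot’s output.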
The human condition
Putting aside the few on the web looking for some mischief, humans in general get stuck in a trap. A pattern trap. Robert proposed a fun social experiment.
“Go anywhere with a group of people and just mention something about your pet dog, cat or even parrot. Just one sentence. And I guarantee you, for the next 20 minutes everyone will be talking about their pets. One sentence will lead to the next and the next, and before you know it everyone will be involved. They will go on and on about their cats. Try it. They are stuck in a pattern trap and they can’t escape it – just set the right trend and they will follow.”
The core mechanics of neural science haven’t changed in a long time. While there have been claims of new algorithms, essentially people are making incremental changes and variations on previous algorithms, making them more accurate, more precise. The general direction of ML, however, hasn’t really changed.
“The brain is a machine that learns patterns, and having those patterns, you learn to give certain responses. But if you observe closely, humans are very machine-like. Humans are essentially organic robots. So if that’s the case, why should we worry about other robots?”
Moving on from the here and now
It may seem like it’s all good fun, working with robots that somewhat resemble a drunken escapee on a good day. But for the field to move forward, it’s not enough for technology companies to develop an algorithm or provide use cases, or for marketing teams to sell it to partners and end users. Humankind has to be ready to accept these changes, and put serious thought into ways of integrating them with current systems.
Dr. Vala knows that AI and ML are useful – but only if we can make use of them efficiently. Otherwise, it could spell disaster if we do not put some serious thought into it.
“Technology is useful as long as it helps us gain actionable insights from these huge datasets that currently surround us.”
“Definitely, Machine Learning is useful when helping us to make data driven strategic decisions toward creating and sustaining a better world for everyone. Otherwise, such efforts will lead to a mass of high-tech intelligent machines with the potential to cause uncontrollable disasters for humankind.”
Richard echoes that sentiment. “Corporations need to get up to speed. The best-case scenario is for legacy to interact with all systems. People need to start thinking about building a digital company. Take DBS for example – it’s a totally different bank, relying on cloud applications.”
“Automation is the future, and people are ready for change; but people lack an understanding of the roles and foundations of the future.”
Ultimately, Adlan and Robert think the solution will be a simple one.
“The human brain is considered to be one algorithm. The majority of our brain – the structure of our brain – is the same everywhere. The part that is in charge of feelings is the same structure that is in charge of speech and motion. We believe there should be a common algorithm.”
Over 50 years of R&D into AI, and even longer for other aspects of science and technology – yet we are still left with more questions and mysteries about the universe and the human body. “And one of these days the answer will pop up, and it will be simple,” Robert mused.
“Most things are simple. Biology – it all boils down to ACTG; physics – it’s protons, neutrons and electrons. Of course, there are more complex components to it, but the core of it remains, even if we don’t fully understand it. Even mathematics is based on addition, subtraction, multiplication and division – which grow into bigger equations.”
“So you put basic patterns together to build even bigger patterns. It doesn’t matter what you learn, everything is a version of something simpler. And so Machine Learning is getting back to something really basic, and building from there – whether it’s speech or vision or hearing or motion, and we build from there.”
No one brain can learn everything. That’s why there are specialisations in the world, and we have experts in different fields – whether historians or medical experts.
“And that is where AI can theoretically combine everything together. The system will contribute to mankind’s advancement and integration.”