The Hanson Regan Blog

The Role of Trust in Human-Robot Interaction

As robots become increasingly common in a wide variety of domains—from military and scientific applications to entertainment and home use—there is an increasing need to define and assess the trust humans have when interacting with robots. In human interaction with robots and automation, previous work has discovered that humans often have a tendency to either overuse automation, especially in cases of high workload, or underuse automation, both of which can make negative outcomes more likely. Furthermore, this is not limited to naive users but applies to experienced ones as well. Robots bring a new dimension to previous work in trust in automation, as they are envisioned by many to work as teammates with their operators in increasingly complex tasks. In this chapter, our goal is to highlight previous work in trust in automation and human-robot interaction and draw conclusions and recommendations based on the existing literature. We believe that, while significant progress has been made in recent years, especially in quantifying and modeling trust, there are still several places where more investigation is needed.

Robots and other complex autonomous systems offer potential benefits through assisting humans in accomplishing their tasks. These beneficial effects, however, may not be realized due to maladaptive forms of interaction. While robots are only now being fielded in appreciable numbers, a substantial body of experience and research already exists characterizing human interactions with more conventional forms of automation in aviation and process industries.

In human interaction with automation, it has been observed that the human may fail to use the system when it would be advantageous to do so. This has been called disuse (underutilization or under-reliance) of the automation [97]. People also have been observed to fail to monitor automation properly (e.g. turning off alarms) when automation is in use, or to accept the automation’s recommendations and actions when inappropriate [7197]. This has been called misuse, complacency, or over-reliance. Disuse can decrease automation benefits and lead to accidents if, for instance, safety systems and alarms are not consulted when needed. Another maladaptive attitude is automation bias [33557788112], a user tendency to ascribe greater power and authority to automated decision aids than to other sources of advice (e.g. humans). When the decision aid’s recommendations are incorrect, automation bias may have dire consequences [2788789] (e.g. errors of omission, where the user does not respond to a critical situation, or errors of commission, where the user does not analyze all available information but follows the advice of the automation).

Both naïve and expert users show these tendencies. In [128], it was found that skilled subject matter experts had misplaced trust in the accuracy of diagnostic expert systems (see also [127]). Additionally, the Aviation Safety Reporting System contains many reports from pilots that link their failure to monitor to excessive trust in automated systems such as autopilots or the FMS [90119]. On the other hand, when corporate policy or federal regulations mandate the use of automation that is not trusted, operators may “creatively disable” the device [113]; in other words, they disuse the automation.

Studies have shown [6492] that trust towards automation affects reliance (i.e. people tend to rely on automation they trust and not use automation they do not trust). For example, trust has frequently been cited [5693] as a contributor to human decisions about monitoring and using automation. Indeed, within the literature on trust in automation, complacency is conceptualized interchangeably as the overuse of automation, the failure to monitor automation, and lack of vigilance [66796]. For optimal performance of a human-automation system, human trust in automation should be well-calibrated. Both disuse and misuse of the automation have resulted from improper calibration of trust, which has also led to accidents [5197].

In [58], trust is conceived to be an “attitude that an agent (automation or another person) will help achieve an individual’s goals in a situation characterized by uncertainty and vulnerability.” A majority of research in trust in automation has focused on the relation between automation reliability and operator usage, often without measuring the intervening variable, trust. The utility of introducing an intervening variable between automation performance and operator usage, however, lies in the ability to make more precise or accurate predictions with the intervening variable than without it. This requires that trust in automation be influenced by factors in addition to automation reliability/performance. The three dimensional (Purpose, Process, and Performance) model proposed by Lee and See [58], for example, presumes that trust (and indirectly, propensity to use) is influenced by a person’s knowledge of what the automation is supposed to do (purpose), how it functions (process), and its actual performance. While such models seem plausible, support for the contribution of factors other than performance has typically been limited to correlation between questionnaire responses and automation use. Despite multiple studies of trust in automation, the conceptualization of trust and how it can be reliably modeled and measured is still a challenging problem.
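
To make the three-dimensional idea above more concrete, here is a toy sketch in Python of how purpose, process, and performance knowledge might be rolled into a single trust score. The linear form and the weights are purely illustrative assumptions on our part; Lee and See's model is conceptual and does not prescribe any particular formula.

```python
from dataclasses import dataclass

# Toy illustration only: one simple way a three-dimensional (purpose, process,
# performance) trust model could be operationalized as a weighted score.
# The weights and the linear combination are assumptions for illustration.

@dataclass
class TrustFactors:
    purpose: float      # belief that the automation is meant to help with this goal (0-1)
    process: float      # understanding of how the automation works (0-1)
    performance: float  # observed reliability in the current task (0-1)

def trust_estimate(f: TrustFactors, weights=(0.2, 0.2, 0.6)) -> float:
    """Weighted combination of the three dimensions; reliance would track this score."""
    return weights[0] * f.purpose + weights[1] * f.process + weights[2] * f.performance

print(round(trust_estimate(TrustFactors(purpose=0.9, process=0.5, performance=0.7)), 2))  # -> 0.7
```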

In contrast to automation, where system behavior has been pre-programmed and system performance is limited to the specific actions it has been designed to perform, autonomous systems/robots have been defined as having intelligence-based capabilities that allow them a degree of self-governance, enabling them to respond to situations that were not pre-programmed or anticipated in the design. Therefore, the role of trust in interactions between humans and robots is more complex and difficult to understand.

In this chapter, we present the conceptual underpinnings of trust in Sect. 8.2, and then discuss models of, and the factors that affect, trust in automation in Sects. 8.3 and 8.4, respectively. Next, we will discuss instruments for measuring trust in Sect. 8.5, before moving on to trust in the context of human-robot interaction (HRI) in Sect. 8.6 both in how humans influence robots, and vice versa. We conclude in Sect. 8.7 with open questions and areas of future work.

 

Source: The Role of Trust in Human-Robot Interaction

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Quantum algorithm could help AI think faster

One of the ways that computers think is by analysing relationships within large sets of data. An international team has shown that quantum computers can do one such analysis faster than classical computers for a wider array of data types than was previously expected.

 

The team's proposed quantum linear system algorithm is published in Physical Review Letters. In the future, it could help crunch numbers on problems as varied as commodities pricing, social networks and chemical structures.

"The previous quantum algorithm of this kind applied to a very specific type of problem. We need an upgrade if we want to achieve a quantum speed-up for other data," says Zhikuan Zhao, corresponding author on the work.

The first quantum linear system algorithm was proposed in 2009 by a different group of researchers. That algorithm kick-started research into quantum forms of machine learning, or artificial intelligence.

A linear system algorithm works on a large matrix of data. For example, a trader might be trying to predict the future price of goods. The matrix may capture historical data about price movements over time and data about features that could be influencing these prices, such as currency exchange rates. The algorithm calculates how strongly each feature is correlated with another by 'inverting' the matrix. This information can then be used to extrapolate into the future.
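
For readers who want a feel for the classical version of this problem, here is a minimal sketch in Python/NumPy of the kind of linear system described above: rows of historical observations, columns of candidate features, and a solve step that recovers how strongly each feature drives the price. All of the numbers and feature names are invented for illustration; the quantum algorithm itself is, of course, not something you can run in a few lines of NumPy.

```python
import numpy as np

# Toy price-prediction setup: each row is a historical observation, each
# column a feature that might influence the price (values invented).
features = np.array([
    [1.10, 0.8],   # e.g. an exchange rate and a demand index
    [1.15, 0.9],
    [1.20, 1.1],
    [1.25, 1.3],
])
prices = np.array([100.0, 104.0, 110.0, 117.0])   # observed prices

# Solve the linear system features @ weights ~= prices (least squares).
weights, *_ = np.linalg.lstsq(features, prices, rcond=None)

# Use the recovered weights to extrapolate a price for new feature values.
new_features = np.array([1.30, 1.4])
print("feature weights:", weights)
print("extrapolated price:", new_features @ weights)
```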

"There is a lot of computation involved in analysing the matrix. When it gets beyond say 10,000 by 10,000 entries, it becomes hard for classical computers," explains Zhao. This is because the number of computational steps goes up rapidly with the number of elements in the matrix: every doubling of the matrix size increases the length of the calculation eight-fold.

The 2009 algorithm could cope better with bigger matrices, but only if the data are sparse. In these cases, there are limited relationships among the elements, which is often not true of real-world data. Zhao, Prakash and Wossnig present a new quantum linear system algorithm that is faster than both the classical and the previous quantum versions, without restrictions on the kind of data it crunches.

As a rough guide, for a 10,000 square matrix, the classical algorithm would take on the order of a trillion computational steps, the first quantum algorithm some tens of thousands of steps and the new quantum algorithm just hundreds of steps. The algorithm relies on a technique known as quantum singular value estimation.

There have been a few proof-of-principle demonstrations of the earlier quantum linear system algorithm on small-scale quantum computers. Zhao and his colleagues hope to work with an experimental group to run a proof-of-principle demonstration of their algorithm, too. They also want to do a full analysis of the effort required to implement the algorithm, checking what overhead costs there may be.

Showing a real quantum advantage over the classical algorithms will need bigger quantum computers. Zhao estimates that "We're maybe looking at three to five years in the future when we can actually use the hardware built by the experimentalists to do meaningful quantum computation with application in artificial intelligence."


Source: Phys.org

 

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

There Are Over 1,000 Alternatives to Bitcoin You’ve Never Heard Of

Bitcoin gets all the attention, especially since it recently rocketed towards $20,000. But many other cryptocurrencies exist, and more are being created at an accelerating rate. A quick look at coinmarketcap.com shows over 1,400 alternatives to Bitcoin (as of this writing), with a combined value climbing towards $1 trillion. So if Bitcoin is so amazing, why do these alternatives exist? What makes them different?

The easy answer is that many are simply copycats trying to piggyback on Bitcoin’s success. However, a handful have made key improvements on some of Bitcoin’s drawbacks, while others are fundamentally different, allowing them to perform different functions. The far more complicated—and fascinating—answer lies in the nitty-gritty details of blockchain, encryption, and mining.

To understand these other cryptocurrencies, Bitcoin’s shortcomings need to first be understood, as the other currencies aim to pick up where Bitcoin falls short.

The Problems With Bitcoin

Bitcoin’s block size is only 1 MB, drastically limiting the number of transactions each block can hold. With blocks added at a pre-programmed rate of roughly one every 10 minutes, this gives a theoretical maximum of about 7 transactions per second. Compared with the significantly higher transaction rates of Visa and PayPal, for example, Bitcoin can’t compete, and with the popularity of Bitcoin soaring, the problem is going to get worse. As of this writing, around 200,000 transactions are backlogged.
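
The roughly 7 transactions per second figure falls straight out of the block size and the 10-minute block interval. Here is a quick back-of-the-envelope check in Python; the ~250-byte average transaction size is our own rough assumption, and real transaction sizes vary.

```python
# Back-of-the-envelope check of Bitcoin's theoretical throughput.
BLOCK_SIZE_BYTES = 1_000_000       # 1 MB block size limit
AVG_TX_SIZE_BYTES = 250            # assumed average transaction size (illustrative)
BLOCK_INTERVAL_SECONDS = 600       # roughly one block every 10 minutes

tx_per_block = BLOCK_SIZE_BYTES // AVG_TX_SIZE_BYTES
tps = tx_per_block / BLOCK_INTERVAL_SECONDS
print(f"~{tx_per_block} transactions per block, ~{tps:.1f} transactions per second")
# -> ~4000 transactions per block, ~6.7 transactions per second
```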

Bitcoin’s scalability problem is also likely to make mining more difficult and increase mining fees. Adding blocks to the blockchain requires an enormous amount of computation to find a hash that satisfies the network’s difficulty target under the SHA-256 cryptographic hash function, for which the miner is rewarded with a predetermined, geometrically decreasing amount of Bitcoin, currently 12.5 per block.

However, each new block takes more computing than the last, meaning it becomes more difficult for less reward. To help offset this, miners can charge fees, and with it becoming more difficult to make a profit, the fees are only going to go up.
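
To make the hash-puzzle idea concrete, here is a deliberately simplified proof-of-work sketch in Python. It repeatedly hashes some block data with different nonces until the SHA-256 digest starts with a required number of zero hex digits; real Bitcoin mining uses a different block format, a numeric difficulty target and double SHA-256, so treat this purely as an illustration of why the search is expensive.

```python
import hashlib

def mine(block_data: bytes, difficulty: int = 4):
    """Search for a nonce whose SHA-256 digest starts with `difficulty` zero hex digits."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine(b"example block header", difficulty=4)
print(f"found nonce {nonce}: {digest}")
# Each extra required zero digit multiplies the expected work by 16, which is why
# real-world difficulty makes mining so computationally (and electrically) expensive.
```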

Because of the computing power needed to process each block, it has been estimated that each transaction requires enough electricity to power the average home for nine days. If this is true, and if Bitcoin continues to grow at the same rate, some have predicted it will reach an unsustainable level within a decade.

Furthermore, Bitcoin’s blockchain has only one purpose: to handle Bitcoin. Given the complexity of the system, it could be doing much more. Also, Bitcoin is not entirely anonymous. For any given Bitcoin address, the transactions and the balance can be seen, as they are public and stored permanently on the network. The details of the owner can be revealed during a purchase.

Altcoins

Ignoring the copycats, several Bitcoin alternatives—or altcoins—have gained popularity. Some of these are a result of changing the Bitcoin code, which is open-source, effectively creating a hard fork in the blockchain and a new cryptocurrency. Others have their own native blockchains.

Hard forks include Bitcoin Cash, Bitcoin Classic, and Bitcoin XT, all three of which increased the block size. XT changed the block size to 8 MB, allowing for up to 24 transactions per second, whereas Classic only increased it to 2 MB. While these two are now terminated due to a lack of community support, Cash is still going. Its major change was to do away with Segregated Witness, which reduces the size of a transaction by removing the signature data, allowing for more transactions per block.

Another Bitcoin derivative is Litecoin. The major changes from Bitcoin are that the creator, Charlie Lee, reduced the block generation time from 10 minutes to 2.5, and instead of using SHA-256, it uses scrypt, which is considered by some to be a more efficient hashing algorithm.
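
The difference between the two hashing schemes is easy to see from Python's standard library, which exposes both SHA-256 and scrypt. The scrypt parameters below mirror the N=1024, r=1, p=1 values commonly cited for Litecoin, but the snippet is only an illustration of the two functions, not of either coin's actual mining code.

```python
import hashlib  # hashlib.scrypt requires a Python build with OpenSSL 1.1+

header = b"example block header"

# SHA-256 (Bitcoin-style): cheap per call and easy to accelerate in dedicated hardware.
sha_digest = hashlib.sha256(header).hexdigest()

# scrypt (Litecoin-style): deliberately memory-hard, which is intended to blunt
# the advantage of specialised mining hardware. Parameters are illustrative.
scrypt_digest = hashlib.scrypt(header, salt=header, n=1024, r=1, p=1, dklen=32).hex()

print("sha256:", sha_digest)
print("scrypt:", scrypt_digest)
```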

As far as native blockchains go, there are a lot of altcoins.

One of the most popular—at least by market capitalization—is Ethereum. The key element that distinguishes Ethereum from Bitcoin is that its language is Turing-complete, meaning it can be programmed for just about anything, such as smart contracts, not just its currency, Ether. For example, the United Nations has adopted it to transfer vouchers for food aid to refugees, keep track of carbon outputs, etc.

Monero addresses Bitcoin’s privacy issue. It uses ring signatures, which hide the sender among a group of decoy signers, and stealth addresses, which hide the recipient. This makes the Monero blockchain opaque, not transparent like other blockchains. However, the programmers have included a “spend” key and a “view” key, which allow for optional transparency if agreed upon for specific transactions.

Dash has avoided Bitcoin’s logjam by splitting the network into two tiers. The first handles block generation done by miners, much like Bitcoin, but the second tier contains masternodes. These handle the new services of PrivateSend and InstantSend, and they add a level of privacy and speed not seen in other blockchains. These transactions are confirmed by a consensus of the masternodes, thus removing them from the computing and time-intensive project of block generation.

IOTA just did away with blocks altogether. It stands for the Internet of Things Application and depends on users to validate transactions instead of relying on miners and their souped-up computers. As a user conducts a transaction, he/she is required to validate two previous transactions, so the rate of validation will always scale with the amount of transactions.
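
As a loose illustration of the approval rule described above, here is a toy tangle in Python in which every new transaction references two earlier ones. The tip-selection and proof-of-work details of the real protocol are far more involved; this sketch only shows why validation capacity scales with the number of transactions.

```python
import random

# Toy tangle: each new transaction approves two earlier transactions, so the
# more transactions arrive, the more validation work gets done. Illustrative only.
tangle = {0: [], 1: []}          # two genesis-style transactions with no approvals

def add_transaction(tangle):
    new_id = len(tangle)
    approved = random.sample(list(tangle), 2)   # pick two earlier transactions to validate
    tangle[new_id] = approved
    return new_id

for _ in range(5):
    add_transaction(tangle)

for tx, approved in tangle.items():
    print(f"tx {tx} approves {approved}")
```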

On the other hand, Ripple, which is now one of the top cryptocurrencies by market capitalization, has taken a completely different approach. While other cryptocurrencies are designed to replace the traditional banking system, Ripple attempts to strengthen it by facilitating bank transfers. That is, bank transfers depend on systems like SWIFT, which is expensive and time-consuming, but Ripple’s blockchain can perform the same functions far more efficiently. Over 100 major banking institutions are signed up to implement it.

Bitcoin isn’t going anywhere anytime soon, but budding crypto-enthusiasts should give heed to these competitors and many others, as they may one day replace it as the dominant cryptocurrency.

 

Source: Singularity Hub

 

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Jobs are changing. But two skills will always be in demand

Fifty years ago, work in developed countries was full of relative certainties. Aside from the periodic recession, most nations were at or near full employment.

Rapid productivity growth was underpinning an improvement in living standards.

A university degree was a meal ticket to a high-paying, secure job as a professional. And for workers with a high school diploma, jobs on manufacturing assembly lines offered a pathway to middle-class prosperity and upward mobility.

Now we live in a much less certain world.

 

So, what skills will always be in demand?

In many countries, recovery from the latest recession has been gradual and protracted, with unemployment and underemployment coming down only slowly.

Global productivity growth has decelerated sharply, as has pay growth. Cutbacks of private sector benefits and the government safety net are forcing workers to bear more risk than they did in the past.

And while their economic impact has thus far been muted, automation and artificial intelligence raise the spectre of mass displacement of workers.

Performing under pressure

So what are workers to do?

We often hear that workers will have to plan ahead, engage in continuous retraining to upskill themselves, and expect to radically pivot multiple times throughout their careers.

That’s a lot of pressure to lay on a person.

It’s hard to know what types of skills are most important to learn, or how to best position yourself to succeed in the face of changing economic times.

 

Your skills are dynamic

Today the World Economic Forum releases its 2017 Human Capital Report, which evaluates countries on how well they’ve equipped their workforce with the knowledge and skills needed to create value – and be successful – in the global economic system.

At LinkedIn, our vision is to create economic opportunity for every member of the global workforce. That’s why we’ve partnered with the World Economic Forum to contribute to the creation of the 2017 Human Capital Report.

One of the unique advantages of LinkedIn data is the way it can be used to analyse the labour market in an unprecedentedly granular way. We can break down human capital into its most fundamental and critical component unit: skills.

We track the supply and demand of 50,000 distinct skills as provided by our members. This allows us to identify geographically where there is a shortage of particular skills, or where they are in surplus. It allows us to identify which skills are emerging, or growing rapidly, or are persistent over time, or shrinking in popularity.

We can identify the “skills genome” – the unique skills profile – of a city, a job function, or an industry. These types of insights make it possible to advise on which skills are needed when the economy next changes gears.
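
As a loose illustration of what a "skills genome" could look like in code, here is a toy sketch that reduces it to skill frequencies across the members of a segment. The member data is invented and LinkedIn's real methodology is far richer; this is only meant to make the idea of a per-segment skills profile tangible.

```python
from collections import Counter

# Invented example: skills listed by members in one segment (a city, job
# function, or industry). A "skills genome" here is just the relative
# frequency of each skill within that segment.
members_in_segment = [
    ["python", "sql", "leadership"],
    ["sql", "customer service", "excel"],
    ["python", "excel", "leadership"],
]

counts = Counter(skill for member in members_in_segment for skill in member)
total = sum(counts.values())
genome = {skill: round(n / total, 2) for skill, n in counts.most_common()}
print(genome)
```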

Our research in this year’s Human Capital Report explores the skills genomes of different university degrees over time.

There are certain skills commonly held by all types of college majors; there are other specialty skills that are unique to specific fields.

So, which skills should you learn?

We found that, across diverse fields of study, there are certain core, cross-functional skills that underpin a career.

These include 1) interpersonal skills, like leadership and customer service, and 2) basic technology skills, like knowing how to use word processing software and manipulate spreadsheets.

Having a strong base in these cross-functional skills is important across industries and job titles – and also gives people the capacity to pivot careers when needed.

Retraining becomes a lot easier when you need to learn just one or two new things, rather than an entire new field of knowledge.

While cross-functional skills are versatile and likely to stand the test of time, they aren’t necessarily the ones that will launch you into a lucrative career off the bat.

Indeed, our data shows that younger generations tend to study more specialized fields than their predecessors, and today’s travel and tourism or international studies majors have more niche and specialized knowledge bases than, say, the history major of yore.

This broader economic trend towards specialization reflects a widening economy that demands more specific skills from the workforce as it grows.

Skills for life

What is clear is that interpersonal skills are unlikely to be rendered obsolete by technological innovation or economic disruptions. In a changing workforce, it's having a strong foundation in these versatile, cross-functional skills that allows people to successfully pivot.

Learning the latest or hottest technology skills shouldn’t come at the expense of investing in the basic, core skills that people need to be successful in the workforce.

Helping governments to better understand, analyse and approach the development of their human capital in this way is our ultimate hope.


Source: We Forum

 

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Why AI isn't taking the Human out of Human Resources

Looking for a job and getting hired used to be quite simple. Job listings were put into the local newspaper or on a town job board until the position was filled. As quaint and comprehensive as the methods were, the world’s ever-growing population made them obsolete. In today’s world, more jobs are being posted and more candidates are applying for them. Fortunately, technology has advanced with the years and AI was created to help both candidates and employers struggling to meet their needs.

LET’S SET THE RECORD STRAIGHT

AI is not coming to take over the world and eliminate humans. Well, at least not in the world of HR Tech. AI is just another tool for those on the daily grind like you and me. Until we create a legitimate artificial consciousness, let's just agree that AI isn't going to replace any human jobs. Rather, it will enable humans to pursue more specialized work. It allows Human Resources workers to finally focus on working with humans. Now let's see how it is already doing just that.

AI Optimizes Job Descriptions

AI is often used to solve a problem before it presents itself. One example is using AI to confirm the utility of the job descriptions presented in your job listings. HR personnel often have to spend an unfortunate amount of time reading applications from unqualified candidates as a result of a vague or inaccurate description. AI can use data from millions of other job posts to ensure that the information is properly targeted at the candidates who are able to fill your job opening.

AI Eliminates Repetitive Tasks

The problem AI solves isn’t just the boredom that comes with repetition, but the mind-numbing frustration that comes with inconsistencies in resumes and the sifting required to pull the needed information. Instead of having an employee blindly fill out their resume and send it in for consideration, why not have them talk to a chatbot and automate the information you need for your job opening? That’s precisely what many companies are doing today. Here, AI is doing all of the information parsing for the hiring manager so they can focus on the most human part of Human Resources.

They have the opportunity to devote their attention and resources to the interview and the relationship building with their candidates. This process also helps to eliminate any potential bias or discrimination. An AI can’t have any preconceived notions of a candidate based on gender, race, religious affiliations, etc. Having a reliable source for choosing candidates without running into any legal troubles is invaluable in itself.

AI Makes the Onboarding Process Nice and Smooth

There are a lot of tasks for a new recruit and there are many ways AI can help them through the transition into the office. Contractual paperwork tends to be one of the more overbearing prospects when taking on a new person. They are legally obligated to read it, understand it, fill in all the blanks and never lose it. AI can help organize and keep track of all of these things and more. Forms, login credentials and schedules can be all organized using AI. Again, this opens up the employer to focus on the human connections and building relationships.

AI chatbots can also help answer any questions about HR policies. This proves to be a more efficient way of expressing the company’s expectations when necessary. Often, company policies are available for all to view but are not simple to search through for answers. AI ensures that if there are any uncertainties, your new and old recruits will always have a swift means of verifying their actions.

As AI continues to develop, we will be able to take our hands off of more tasks that keep us from the face-to-face communication and other work that AI can’t do. Finding and accepting new employees is still a human process. It requires a real connection and a true understanding of what need is being filled and by whom. AI is the perfect tool to elevate this process and help the HR realm evolve.

 Source: Social Hire

If you’re interested in a career in Artificial Intelligence or Robotics call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Artificially Intelligent Robot Predicts Its Own Future by Learning Like a Baby

For toddlers, playing with toys is not all fun-and-games—it’s an important way for them to learn how the world works. Using a similar methodology, researchers from UC Berkeley have developed a robot that, like a child, learns from scratch and experiments with objects to figure out how to best move them around. And by doing so, this robot is essentially able to see into its own future.

A robotic learning system developed by researchers at Berkeley’s Department of Electrical Engineering and Computer Sciences visualizes the consequences of its future actions to discover ways of moving objects through time and space. Called Vestri, and using technology called visual foresight, the system can manipulate objects it’s never encountered before, and even avoid objects that might be in the way.

 

Importantly, the system learns from a tabula rasa, using unsupervised and unguided exploratory sessions to figure out how the world works. That’s an important advance because the system doesn’t require an army of programmers to code in every single possible physical contingency which, given how complicated and varied the world is, would be a hideously onerous (and even intractable) task. In future, scaled-up versions of this self-learning predictive system could make robots more adaptable in factory and residential settings, and help self-driving vehicles anticipate future events on the road.

Led by UC Berkeley assistant professor Sergey Levine, the researchers built a robot that can predict what it’ll see through a camera if it performs a certain sequence of movements. As noted, the system is not pre-programmed, and instead learns through a process called model-based reinforcement learning. It sounds fancy, but it’s similar to the way a toddler learns how to move objects around through repetition and trial-and-error. Child psychologists call this “motor babbling,” and the UC Berkeley researchers applied the same methodology and terminology to Vestri.

“In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualize how different behaviors will affect the world around it,” said Levine in a statement. “This can enable intelligent planning of highly flexible skills in complex real-world situations.”

To train the system, the researchers let the robot “play” with several objects on a small table. A form of artificial intelligence known as deep learning was applied to recurrent video prediction, allowing the bot to foresee how an image’s pixels would move from one frame to another based on its movements. In tests, the robot’s self-acquired model of the world allowed it to move objects it’s never dealt with before, and move them to desired locations (sometimes having to move the objects around obstacles).
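
The paper's actual system learns a deep video-prediction model, which is far beyond a blog snippet, but the planning loop behind "visual foresight" can be sketched in a few lines: roll a predictive model forward for many candidate action sequences and pick the one whose predicted outcome lands closest to the goal. Everything below, from the hand-coded linear "dynamics" to the function names, is an illustrative assumption rather than the Berkeley implementation.

```python
import numpy as np

def predict(state, actions, dynamics):
    """Roll the (assumed) model forward: each action displaces the object a little."""
    states = []
    current = state
    for action in actions:
        current = current + dynamics @ action
        states.append(current)
    return np.array(states)

def plan(state, goal, dynamics, horizon=5, n_candidates=500, seed=0):
    """Random-shooting planner: sample action sequences, keep the best predicted outcome."""
    rng = np.random.default_rng(seed)
    best_cost, best_actions = np.inf, None
    for _ in range(n_candidates):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, dynamics.shape[1]))
        predicted = predict(state, actions, dynamics)
        cost = np.linalg.norm(predicted[-1] - goal)   # distance of final predicted state from goal
        if cost < best_cost:
            best_cost, best_actions = cost, actions
    return best_actions, best_cost

dynamics = np.eye(2)                  # assumed: actions translate the object directly
state = np.array([0.0, 0.0])          # object's current position
goal = np.array([0.3, -0.2])          # where we want to push it
actions, cost = plan(state, goal, dynamics)
print("first planned action:", actions[0], "| predicted final error:", round(cost, 3))
```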

“Children can learn about their world by playing with toys, moving them around, grasping, and so forth. Our aim with this research is to enable a robot to do the same: to learn about how the world works through autonomous interaction,” Levine said. “The capabilities of this robot are still limited, but its skills are learned entirely automatically, and allow it to predict complex physical interactions with objects that it has never seen before by building on previously observed patterns of interaction.”

As Levine notes, the system is still pretty basic, and it can only “see” a few seconds in the future. Eventually, a self-taught system like this could learn the lay-of-the-land inside a factory, and have the foresight to avoid human workers and other robots who may be in the same environment. It could also be applied to autonomous vehicles where this predictive model could, for instance, allow it to pass a slow-moving vehicle by moving into the on-coming traffic lane, or avoid a collision.

For Levine’s team, the next step will be to get the robot to perform more complex tasks, such as picking-up and placing down objects, and manipulating soft and malleable objects like cloth, rope, and fragile objects. This latest research will be presented later today at the Neural Information Processing Systems conference in Long Beach, California.

Source: Gizmodo

 

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

7 Types of Profile Pics You Should Never Post on LinkedIn, According to Recruiters

Recruiters spend a lot of time on LinkedIn combing through thousands of profiles looking for people who match their requirements. In order to make the process more efficient, recruiters must weed out people based on what they view. There's a famous phrase by Doris Day: "People hear what they see." Recruiters (and anyone else who looks at your profile) literally imagine what you're like based on your photo. Over time, as they talk to hundreds of candidates, recruiters naturally start to form opinions, also known as candidate bias, towards people with certain things on their profiles. Let's face it, hiring is discrimination. Recruiters must find a way to narrow down the sheer number of candidates, which means something as simple as your profile picture can determine whether you get contacted.

If a picture is worth a thousand words, then these scream, "Don't hire me!"

I asked a large group of recruiters I know for their biggest pet peeves on candidates' LinkedIn profiles. The feedback was overwhelming. There were many things that annoy them. But, the overwhelming response was centered on profile pictures. Here are the top seven epic fails you can make on LinkedIn with your photo:

The "my puppy is the cutest" photo. Heather L. says, "I don't want to see pictures of your cats, dogs, car, etc.... I really don't need to see fun pics." Consider this: for every dog-lover out there, there's a recruiter that's a cat person. Don't ruin your chances by oversharing about your preferences.

The "I'm a woodsman" photo. Rebecca S. says, "I saw one with a cut up deer in a wheel barrel. It was AWFUL!" LinkedIn is NOT the place to try to look strong, intense, or unique. You are trying to get a job. You should look as friendly and approachable as possible.

The "I'm best man material" photo. Kendra S. says, "I had to ask a candidate to replace a picture of himself in tux holding a Heineken bottle. Had to explain Best Man title would not be applicable nor relevant for winning job." While they say everyone looks better dressed up, the tux is overkill. Better still, keep it to a headshot so your clothing (and, beer choice), isn't judged.
 

The "I'm a mystery" photo. Amber S. says, "Not smiling in the picture or doing the smirk smile." As mentioned earlier, the goal of a profile picture is to look approachable. The smirk can be interpreted as cocky, conniving, and sassy. No smile can appear too serious and anxious. Find your natural smile and let it shine through in the photo. Make sure your eyes are smiling too.

The "I'm sexy and I know it" photo. Jennifer F. says, "Inappropriate profile pics. I've seen candidate's pics from their boudoir photo shoot. This is a business networking site. If you don't have a headshot, stand in front of a blank wall in appropriate business attire, and have someone take your picture." In a time when the #MeToo movement is changing the workplace as we know it, sexy photos are a complete no-no.

The "but, it had the best light" photo. Dave T. says "I hate car selfies." and Stacy J. says, "anything too cutesy or unprofessional." Don't put up a picture just because the lighting was good. Or, you think you look adorable. This isn't a dating app.

And the worst offender? No photo at all. DeAnna T. says, "A profile with no picture." In fact, most of the recruiters agreed that the lack of a photo is an immediate eliminator. Why? To them, it usually means the person either has something to hide, isn't tech-savvy, has a fake profile, or has abandoned the profile because they were too lazy to care about how their professional persona looks on LinkedIn.

P.S. - It doesn't stop at the photo.

Your entire profile is being judged. The headline, summary, and work history are equally important. The right amount of text and the appropriate keywords are both critical to making a good first impression with your LinkedIn profile. Taking time to understand what a well-optimized profile looks like can dramatically increase the number of views and outreaches you get from recruiters.

Source: Inc

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Binary randomization makes large-scale vulnerability exploitation nearly impossible

One of the main reasons cyber risk continues to increase exponentially is due to the rapid expansion of attack surfaces – the places where software programs are vulnerable to attack or probe by an adversary. Attack surfaces, according to the SANS Institute, can include any part of a company’s infrastructure that exposes its networks and systems to the outside, from servers and open ports to SQLs, email authentication and even employees with “access to sensitive information.” It can also include user input via keyboard or mouse, network traffic and external hardware that is not protected by cyberhardening technology.

It would be easy to blame the Internet of Things (IoT) for the expanding attack surfaces, as Intel projects two billion smart devices worldwide by 2020. But in reality, the IoT is only part of the attack surface epidemic.

According to Cybersecurity Ventures, there are now 111 billion new lines of code written each year, introducing vulnerabilities both known and unknown. Not to be overlooked as a flourishing attack vector are humans, who some argue are both the most important and the weakest link in the cyberattack kill chain. In fact, in many cybersecurity circles there is a passionate and ongoing debate regarding just how much burden businesses should put on employees to prevent and detect cyber threats. What is not up for debate, however, is just how vulnerable humans are to intentionally or unintentionally opening the digital door for threat actors to walk in. This is most evident in the fact that 9 out of 10 cyberattacks begin with some form of email phishing targeting workers with mixed levels of cybersecurity training and awareness.

Critical Infrastructure Protection Remains a Challenge

Critical infrastructure, often powered by SCADA systems and equipment now identified as part of the Industrial Internet of Things (IIoT), is also a major contributor to attack surface expansion. Major attacks targeting these organizations stem more from memory corruption errors and buffer overflow exploits than from spear-phishing or email spoofing, and tend to be the work of nation states and cyber terrorists more so than generic hackers.

As mentioned in our last blog post, “Industrial devices are designed to have a long-life span, but that means most legacy equipment still in use was not originally built to achieve automation and connectivity.” The IIoT does provide many efficiencies and cost-savings benefits to companies in which operational integrity, confidentiality and availability are of the utmost importance, but the introduction of technology into heavy machinery and equipment that wasn’t built to communicate outside of a facility has proven challenging. The concept of IT/OT integration, which is meant to merge the physical and digital security of corporations and facilities, has failed to reduce vulnerabilities in a way that significantly reduces risk. As a result, attacks seeking to exploit critical infrastructure vulnerabilities, such as WannaCry, have become the rule and not the exception.

What if Luke Couldn’t Destroy the Death Star? 

To date, critical infrastructure cybersecurity has relied too much upon network monitoring and anomaly detection in an attempt to detect suspicious traffic before it turns problematic. The challenge with this approach is that it is reactionary and only effective after an adversary has breached some level of defenses.

We take an entirely different approach, focusing on prevention by denying malware the uniformity it needs to propagate. To do this, we use a binary randomization technique that shuffles the basic constructs of a program, known as basic blocks, to produce code that is functionally identical, but logically unique. When an attacker develops an exploit for a known vulnerability in a program, it is helpful to know where all the code is located so that they can repurpose it to do their bidding. Binary randomization renders that prior knowledge useless, as each instance of a program has code in different locations.
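
A toy way to see the idea, well short of what RunSafe does to real binaries, is to model a program as labelled basic blocks whose control flow follows labels rather than physical position. Shuffling the blocks' "addresses" then changes every instance's layout while leaving its behaviour untouched; the sketch below is purely conceptual.

```python
import random

# A tiny "program": each labelled basic block does some work, then names its
# successor. Control flow follows labels, not physical order, so the layout
# can be shuffled per instance without changing what the program computes.
BLOCKS = {
    "entry":  (lambda x: x + 1, "double"),
    "double": (lambda x: x * 2, "exit"),
    "exit":   (lambda x: x,     None),      # None marks the end of execution
}

def randomized_layout(blocks):
    """Assign each block a different 'address' (physical position) per instance."""
    labels = list(blocks)
    random.shuffle(labels)
    return {label: address for address, label in enumerate(labels)}

def execute(blocks, x, start="entry"):
    """Run the program by following labels; the layout never affects the result."""
    label = start
    while label is not None:
        work, label = blocks[label]
        x = work(x)
    return x

instance_a, instance_b = randomized_layout(BLOCKS), randomized_layout(BLOCKS)
print("instance A layout:", instance_a)    # same code, different block addresses
print("instance B layout:", instance_b)
print("both instances compute:", execute(BLOCKS, 3), execute(BLOCKS, 3))   # identical behaviour
```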

One way to visualize the concept of binary randomization is to picture the Star Wars universe at the time when Luke Skywalker and the Rebel Alliance set off to destroy the Death Star. The Rebel Alliance had the blueprints to the Death Star and used those blueprints to find its only weakness. Luke set off in his X-Wing and delivered a proton torpedo directly to the weak spot in the Death Star, destroying it. In this scenario, the Death Star is a vulnerable computer program, and Luke is an adversary trying to exploit said computer program.

Now imagine that the Galactic Empire built 100 Death Stars, each protected by RunSafe’s new Death Star Weakness Randomization. This protection moves the weakness to a different place on each Death Star. Now imagine you are Luke, flying full speed toward the weakness in the Death Star, chased by TIE fighters, only to find that the weakness is not where the blueprint showed. The Rebel attack fails, and the Galactic Empire celebrates by destroying another planet. Similar to the Death Star scenario above, code protected with binary randomization will still contain vulnerabilities, but an attacker’s ability to successfully exploit that vulnerability on multiple targets becomes much more difficult.

As critical infrastructure attack surfaces continue to expand, binary randomization is poised to reduce the capacity of attackers to exploit vulnerabilities: because each instance of the program is unique, large-scale exploitation of a program becomes nearly impossible – even for Luke Skywalker himself.

 

Source: IIoTworld

If you’re interested in a career in IoT call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Can Artificial Intelligence be trusted?

AI is pretty amazing from our perspective: we use it daily in our algo analysis work of the cryptocurrency markets, and it even helped us personally identify some coins that we'd never have looked at right before they took off (e.g. a 42x gain on one currency over a few days a while back).

But AI could also have its dark sides… Elon Musk is crying himself to sleep every night thinking about how terrible it could be, and we all know what happened when Skynet went online…

So can you really trust AI?

I’ll tell you a little secret, you’re already trusting AI and using it every day.

I live in the US, close enough to the Canadian border to make a drive to Montreal in a few hours. I have no idea how to get to Montreal from my house, and no idea how to get around the city once I’m there, yet I managed to find my way over pretty easily and navigate to my favorite spots (PM me for a yummy poutine recommendation…)

How did I do that? I used Google Maps, an AI-powered application that finds the best and fastest way for you to get from one place to another in many places around the globe.

I’m old enough to remember road trips where you’d have to take a map, figure out the different waypoints and directions and in many cases, stop at a gas station to find out where you are.

Then there was Mapquest, you’d print out directions on paper, and try and figure out if they are leading you to where you want to go or to an early grave in a ditch by the side of some backroad.

Then came GPS, and we let it tell us where to go to a point of danger and loss of life. (https://theweek.com/articles/464674/8-drivers-who-blindly-followed-gps-into-disaster)

Now we have Google maps, and the level of trust is pretty much absolute. I know some people who use it to drive to work every day, the same route, over and over again, and they would be lost without it…

Compared to the days of paper maps, this is awesome! I can now relax, not worry about getting lost or driving off a bridge and spend time enjoying the ride and time with my family and friends.

This is the scale of trust in technology, we slowly take small steps and increase our trust in it until we hand over an entire task to it and feel happy with the results.

I’m writing this today because I recently took the next step on the scale of trust with something that we all have issues with – Money.

I don’t remember the first time I bought something online, it was too long ago, but it was an historic moment for me, stepping into the world of e-Commerce and trusting my money with someone that I don’t see.

I do remember the first time I bought something on a mobile device, another personal moment of taking the leap and trusting a new technology with money. It was a camera lens I bought on a really poorly designed website (mobile websites weren't much of a thing back then) while eating lunch at a restaurant in Chicago. Now I was spending money outside of my comfort zone, at a random place in the street, over a wireless connection that I didn't control.

Yesterday I took another step trusting technology with my money. I let an AI algorithm make multiple purchases of cryptocurrencies without asking me for permission or even telling me beforehand what those currencies are.

This is equivalent to getting into a car with Google Maps. From this point on, I don’t have to worry about trading anymore. I have an AI to do it for me.

When I did my own trading, there was a lot of fear and stress involved, also lots of uncertainty and self doubt. Surprisingly, letting a machine make these decisions for me was a very calm and relaxing experience. It took away all the fear and emotion from trading and left me with trust. I’ve seen what this AI could do in the past, and I trust it implicitly.

So can you trust AI? So far, I’d say yes. Killer robots and time travelling governors of California? That’s for another post.

Source: Tokenai

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

 

Algorithms will out-perform Doctors in just 10 years time

The power of algorithms to calculate, contemplate and anticipate the needs of patients is improving rapidly and shows no sign of slowing down. Everything from patient diagnosis to therapy selection will soon be moving at exponential rates. Does that mean the end of doctors? Not quite. To better understand technology’s ever-growing role in healthcare, we first have to better examine the potential of the tools and timelines that we are working with. A recent study done at Beth Israel Deaconess Medical Center (BIDMC) and Harvard Medical School showed that AI isn’t about humans versus machines. The researchers trained a deep learning algorithm to identify metastatic breast cancer by interpreting pathology images. Their algorithm reached an accuracy of 92.5%, while pathologists reached an accuracy of 97%. But used in combination, the detection rate approached 100 percent (approximately 99.5 percent). It is exactly this kind of collaboration between humans and machines that is going to play a vital role in the age of AI, and we already have a blueprint of how a productive partnership could look.

Digitization in the next Decade

10 years is a long time when you consider that during this period we will have access to new neurosynaptic processing power such as IBM’s TrueNorth, or cloud-based quantum computing. Ten years ago the iPhone was introduced, which led to the development of 180,000 registered health apps – roughly 50 apps a day. Yes, a large part of them aren’t useful, but we can’t ignore the impact apps have had on patients and clinicians. During the last 5 years we have seen error rates on speech and image recognition drop by over 20 percent to nearly human accuracy. So it is not a long shot to predict that algorithms will soon outperform humans on specific tasks such as diagnosing disease or selecting the best personalized treatment plan. We can’t ignore technology that, depending on where you live, can deliver 10 to 100 times better results. A new study demonstrating this potential is published every month: even today, such diagnostic algorithms have an error rate of only 5% when detecting melanoma. Among the best human specialists, it is 16%.

In medicine, error rates have not been the subject of much discussion until now. That’s not because they were not of serious importance, but because they were inevitable – to err is human.

Today’s doctors are no longer in a position to know everything that is being published – on average, 800,000 studies per year are published in more than 5,600 medical journals. What person could ever hope to process all of that? With the current pace of advancements in AI, one can easily assume that 10 years from now algorithms will outperform humans on 80% of today’s classified diagnoses.

I refer specifically to “today’s classified diagnoses” because I believe that the impact of precision medicine will also bring about a complete change in medicine, and we will have to re-write the textbook of medicine.

Thanks to new technologies from genome diagnostics and the application of artificial intelligence, we are able to better understand and influence the development of diseases and aging processes. This means that in the near future it will no longer be primarily a question of treating diseases, but of preventing them.

The End of Doctors?

There are many claims that new technology will eventually replace doctors. Personally, I hope it doesn’t. Studies have already shown that diagnostic and treatment quality is much better when human physicians and algorithms work together. Chess was one of the first areas that was taken over and subsequently dominated by machines, almost 20 years ago. After Garry Kasparov, the reigning world champion at the time, lost to the IBM computer ‘Deep Blue’ in 1997, the head-to-head contest between humans and machines lost much of its appeal. Today no human, not even the grandest of all grand masters, can beat even a mid-tier chess program running on an iPhone. After this huge symbolic victory for the machines, there was doubt that humans could contribute anything meaningful to the world of chess ever again. But the advent of so-called ‘freestyle chess’ tournaments showed how much humans still had to offer the game of chess. These events are played by teams that can include any combination of human and machine players. The surprising insight from those tournaments is that the teams with the strongest human/machine partnership dominate even the strongest computer. Kasparov himself explained the results of a 2005 freestyle tournament like this:

The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants.

What helped the team win didn’t have anything to do with being the best chess players or having the most powerful chess computer but having the best process of collaborating with machines. It was all about the partnership and the complementary interplay between humans and machines. Humans still have a lot to offer to the game of chess if they are not racing against the machines but with the machines. To achieve the best results humans and machines have to collaborate — they have to become partners. But this requires a new set of skills and a new way of thinking on the part of humans.

We also know that in our current healthcare systems we are lacking the human element. Recent studies have shown that social determinants account for more than 50% of our health status. In a 7-minute consultation, a physician spends less than 20% of the time on true human interaction, with the rest focused on collecting clinical data, reasoning, documenting, administrating and coordinating. Some of the most important parts of care delivery, empathy and compassion, have become neglected. This already starts at medical school.

Compassionate care makes a difference in how well a patient recovers from illness. In healthcare, good communication and emotional support sometimes decide whether a patient lives or dies, but today there is no billing code for compassion. Ken Schwartz, the founder of the Schwartz Center, believed that acts of kindness – the simple human touch from his caregivers – made the unbearable bearable. So if a 7-minute consultation today involves 6 minutes of activities that will be automated, can we please fight for a system that rewards compassion and the other human values that are so desperately needed in the healthcare system and will hardly be replaced by robots or machines?

Future Insurance Policies

With more data from machines also comes more empowerment for patients. People will have more and more monitoring tools available, and diagnostics will become increasingly decentralized. That is why patients will take much more responsibility for themselves. As with doctors in general, this does not mean that GPs will become superfluous; patients will probably just communicate with them more frequently online in the future. Diabetes patients, for example, can already bring their blood sugar back to normal levels without any medication, simply by using monitoring tools in conjunction with online coaching from their doctor.

Thanks to precision medicine, we will soon be able to measure so-called "biomarkers" in our bodies, which will enable us to read biological processes and derive diagnoses and prognoses from them, e.g. from our breath. Such tools are already available today, and thanks to them we will soon be able to detect certain types of cancer, such as lung cancer, at a very early stage. And if we detect cancer much sooner, we can treat it much earlier. That also means an insurance provider could save money for their organization and their customers by identifying diseases at a stage where treatment is less costly.

Conclusion

It is important to recognize that as technology's role in the health sector expands as a result of increased capabilities, many things are subject to change. That does not mean, however, that the roles of humans will disappear so much as they will transform. Some researchers have started to train robots and AI systems to mimic empathy. But can we seriously believe a robot will outperform humans when it comes to delivering bad news? Or do you really want to hear from a robot that you have 7 months, 5 days and 3 hours to live?

It is time we start to actively lead and design our future healthcare systems, so we also have time to redefine the value system that healthcare is based on. It's up to all of us to define what future we want. These new value systems should focus on the activities that machines will not do better.

 

Source: Dataeconomy

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

World's First 3D Printed Human Corneas

Scientists at Newcastle University have 3D printed the world's first human corneas.
By creating a special bio-ink using stem cells mixed together with alginate and collagen, they were able to print the cornea using a simple low-cost 3D bio-printer.
It's hoped, after further testing, that this new technique could be used to help combat the world-wide shortage of corneas for the 15 million people requiring a transplant.

The first human corneas have been 3D printed by scientists at Newcastle University.

This means that the technique could be used in the future to ensure an unlimited supply of corneas.

As the outermost layer of the human eye, the cornea has an important role in focusing vision.

Yet there is a significant shortage of corneas available to transplant, with 10 million people worldwide requiring surgery to prevent corneal blindness as a result of diseases such as trachoma, an infectious eye disorder.

In addition, almost 5 million people suffer total blindness due to corneal scarring caused by burns, lacerations, abrasion or disease.

The proof-of-concept research, published today in Experimental Eye Research, reports how stem cells (human corneal stromal cells) from a healthy donor cornea were mixed together with alginate and collagen to create a solution that could be printed, a ‘bio-ink’.

Using a simple low-cost 3D bio-printer, the bio-ink was successfully extruded in concentric circles to form the shape of a human cornea. It took less than 10 minutes to print.

The stem cells were then shown to culture – or grow.

 

Unique bio-ink

Che Connon, Professor of Tissue Engineering at Newcastle University, who led the work, said: “Many teams across the world have been chasing the ideal bio-ink to make this process feasible.

“Our unique gel - a combination of alginate and collagen - keeps the stem cells alive whilst producing a material which is stiff enough to hold its shape but soft enough to be squeezed out the nozzle of a 3D printer.

“This builds upon our previous work in which we kept cells alive for weeks at room temperature within a similar hydrogel. Now we have a ready to use bio-ink containing stem cells allowing users to start printing tissues without having to worry about growing the cells separately.”  

The scientists, including first author Ms Abigail Isaacson from the Institute of Genetic Medicine, Newcastle University, also demonstrated that they could build a cornea to match a patient’s unique specifications.

The dimensions of the printed tissue were originally taken from an actual cornea. By scanning a patient’s eye, they could use the data to rapidly print a cornea which matched the size and shape.

Professor Connon added: “Our 3D printed corneas will now have to undergo further testing and it will be several years before we could be in the position where we are using them for transplants.

“However, what we have shown is that it is feasible to print corneas using coordinates taken from a patient eye and that this approach has potential to combat the world-wide shortage.”

Significant progress

Dr Neil Ebenezer, director of research, policy and innovation at Fight for Sight, said: “We are delighted at the success of researchers at Newcastle University in developing 3D printing of corneas using human tissue. 

“This research highlights the significant progress that has been made in this area and this study is important in bringing us one step closer to reducing the need for donor corneas, which would positively impact some patients living with sight loss.

“However, it is important to note that this is still years away from potentially being available to patients and it is still vitally important that people continue to donate corneal tissue for transplant as there is a shortage within the UK. 

“A corneal transplant can give someone back the gift of sight.”

Reference: 3D Bioprinting of a Corneal Stroma Equivalent. Abigail Isaacson, Stephen Swioklo, Che J. Connon. Experimental Eye Research.


Source: NCL

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

10 Powerful Examples Of Artificial Intelligence In Use Today

The machines haven't taken over. Not yet at least. However, they are seeping their way into our lives, affecting how we live, work and entertain ourselves. From voice-powered personal assistants like Siri and Alexa, to more underlying and fundamental technologies such as behavioral algorithms, suggestive searches and autonomous self-driving vehicles boasting powerful predictive capabilities, there are several examples and applications of artificial intelligence in use today.

However, the technology is still in its infancy. Much of what companies are calling A.I. today isn't necessarily A.I. As a software engineer, I could claim that any piece of software has A.I. because it contains an algorithm that responds to pre-defined, multi-faceted input or user behavior. That isn't necessarily A.I.

A true artificially-intelligent system is one that can learn on its own. We're talking about neural networks from the likes of Google's DeepMind, which can make connections and reach meanings without relying on pre-defined behavioral algorithms. True A.I. can improve on past iterations, getting smarter and more aware, allowing it to enhance its capabilities and its knowledge.

That type of A.I., the kind we see in wonderful stories depicted on television through the likes of HBO's powerful and moving series Westworld, or Alex Garland's Ex Machina, is still a long way off. We're not talking about that. At least not yet. Today, we're talking about the pseudo-A.I. technologies that are driving much of our voice and non-voice based interactions with the machines -- the machine-learning phase of the Digital Age.

While companies like Apple, Facebook and Tesla roll out ground-breaking updates and revolutionary changes to how we interact with machine-learning technology, many of us are still clueless about just how A.I. is being used today by businesses both big and small. How much of an effect will this technology have on our future lives, and in what other ways will it seep into day-to-day life? When A.I. really blossoms, how much of an improvement will it be over the current iterations of this so-called technology?

A.I. And Quantum Computing

The truth is that, whether or not true A.I. is out there or is actually a threat to our existence, there's no stopping its evolution and its rise. Humans have always fixated on improving life across every spectrum, and the use of technology has become the vehicle for doing just that. And although the past 100 years have seen more dramatic technological upheavals to life than all of human history before them, the next 100 years are set to pave the way for a multi-generational leap forward.

This will be at the hands of artificial intelligence. A.I. will also become smarter, faster, more fluid and human-like thanks to the inevitable rise of quantum computing. Quantum computers will not only tackle many of life's most complex problems and mysteries regarding the environment, aging, disease, war, poverty, famine, the origins of the universe and deep-space exploration, just to name a few; they'll also soon power all of our A.I. systems, acting as the brains of these super-human machines.

However, quantum computers hold their own inherent risks. What happens after the first quantum computer goes online, making the rest of the world's computing obsolete? How will existing architecture be protected from the threat that these quantum computers pose? Considering that the world lacks any formidable quantum resistant cryptography (QRC), how will a country like the United States or Russia protect its assets from rogue nations or bad actors that are hellbent on using quantum computers to hack the world's most secretive and lucrative information?

Nigel Smart, founder of Dyadic Security, Vice President of the International Association for Cryptologic Research, Professor of Cryptology at the University of Bristol and an ERC Advanced Grant holder, tells me that quantum computers could still be about 5 years out. However, when the first quantum computer is built, Smart says that:

"...all of the world's digital security is essentially broken. The internet will not be secure, as we rely on algorithms which are broken by quantum computers to secure our connections to web sites, download emails and everything else. Even updates to phones, and downloading applications from App stores will be broken and unreliable. Banking transactions via chip-and-PIN could [also] be rendered insecure (depending on exactly how the system is implemented in each country)."

Clearly, there's no stopping a quantum computer led by a determined party without a solid QRC. While all of this still seems a long way off, the future of this technology presents a Catch-22: able to solve the world's problems and likely to power all the A.I. systems on earth, but also incredibly dangerous in the wrong hands.

Applications of Artificial Intelligence In Use Today

Beyond our quantum-computing conundrum, today's so-called A.I. systems are merely advanced machine learning software with extensive behavioral algorithms that adapt themselves to our likes and dislikes. While extremely useful, these machines aren't getting smarter in the existential sense, but they are improving their skills and usefulness based on large datasets. These are some of the most popular examples of artificial intelligence in use today.

#1 -- Siri

Everyone is familiar with Apple's personal assistant, Siri. She's the friendly voice-activated computer that we interact with on a daily basis. She helps us find information, gives us directions, adds events to our calendars, helps us send messages and so on. Siri is a pseudo-intelligent digital personal assistant. She uses machine-learning technology to get smarter and better able to predict and understand our natural-language questions and requests.

#2 -- Alexa

Alexa's rise to become the smart home's hub has been somewhat meteoric. When Amazon first introduced Alexa, it took much of the world by storm. Its usefulness and its uncanny ability to decipher speech from anywhere in the room have made it a revolutionary product that can help us scour the web for information, shop, schedule appointments, set alarms and a million other things, but also help power our smart homes and be a conduit for those who might have limited mobility.

#3 -- Tesla

If you don't own a Tesla, you have no idea what you're missing. This is quite possibly one of the best cars ever made. Not only for the fact that it's received so many accolades, but because of its predictive capabilities, self-driving features and sheer technological "coolness." Anyone that's into technology and cars needs to own a Tesla, and these vehicles are only getting smarter and smarter thanks to their over-the-air updates.

#4 -- Cogito

Originally co-founded by CEO Joshua Feast and Dr. Sandy Pentland, Cogito is quite possibly one of the most powerful examples of behavioral adaptation to improve the emotional intelligence of customer support representatives on the market today. The company fuses machine learning and behavioral science to improve customer interaction for phone professionals. This applies to the millions upon millions of voice calls that occur on a daily basis.

#5 -- Boxever

Boxever, co-founded by CEO Dave O'Flanagan, is a company that leans heavily on machine learning to improve the customer's experience in the travel industry and deliver 'micro-moments,' or experiences that delight the customers along the way. It's through machine learning and the use of A.I. that the company has dominated the playing field, helping its customers find new ways to engage their clients in their travel journeys.

#6 -- John Paul

John Paul, a highly esteemed luxury travel concierge company helmed by its astute founder, David Amsellem, is another powerful example of potent A.I., using predictive algorithms for existing-client interactions to understand and anticipate their desires and needs on an acute level. The company powers the concierge services for millions of customers through some of the world's largest companies, such as VISA, Orange and Air France, and was recently acquired by Accor Hotels.

#7 -- Amazon.com

Amazon's transactional A.I. has been around for quite some time, allowing it to make astronomical amounts of money online. With its algorithms refined more and more with each passing year, the company has gotten acutely smart at predicting just what we're interested in purchasing based on our online behavior. While Amazon plans to ship products to us before we even know we need them, it hasn't quite gotten there yet. But it's most certainly on the horizon.

#8 -- Netflix

Netflix provides highly accurate predictive technology based on customers' reactions to films. It analyzes billions of records to suggest films you might like based on your previous reactions and choices. This tech is getting smarter and smarter by the year as the dataset grows. However, the tech's one drawback is that most films from smaller labels go unnoticed while big-name movies grow and balloon on the platform.

#9 -- Pandora

Pandora's A.I. is quite possibly one of the most revolutionary technologies out there today. The company calls it its musical DNA. Each song is first manually analyzed by a team of professional musicians against 400 musical characteristics, and the system has an incredible track record for recommending songs that would otherwise go unnoticed but that people inherently love.
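
To make the idea concrete, here is a minimal sketch of content-based recommendation over hand-labelled attribute vectors. The songs and their three attributes are invented for illustration; this shows the general approach, not Pandora's actual algorithm.

    # Minimal sketch of content-based recommendation over hand-labelled features.
    # The songs and their three attributes are invented; real catalogues use hundreds.
    import math

    songs = {
        "Song A": [0.9, 0.2, 0.7],   # made-up attributes, e.g. acoustic, tempo, vocals
        "Song B": [0.8, 0.3, 0.6],
        "Song C": [0.1, 0.9, 0.2],
    }

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norms

    def recommend(liked, catalogue, top_n=1):
        # Rank every other song by similarity to the one the listener liked.
        scores = {name: cosine(catalogue[liked], vec)
                  for name, vec in catalogue.items() if name != liked}
        return sorted(scores, key=scores.get, reverse=True)[:top_n]

    print(recommend("Song A", songs))   # ['Song B']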

#10 -- Nest

Most everyone is familiar with Nest, the learning thermostat that was acquired by Google in January of 2014 for $3.2 billion. The Nest learning thermostat, which, by the way, can now be voice-controlled by Alexa, uses behavioral algorithms to predictively learn from your heating and cooling needs, thus anticipating and adjusting the temperature in your home or office based on your own personal needs, and also now includes a suite of other products such as the Nest cameras.

Source: Forbes

 

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Why blockchain is not such a bad technology

Before we start, it is important to remember that blockchain and Bitcoin are not the same thing. Bitcoin technology combines several technologies: money transfer principles, cryptographic principles, blockchain proper, the concept of consensus, the proof-of-work principle, peer-to-peer networking, participant motivation, Merkle trees for organizing transactions, transparency principles, hashing, and more.

Therefore, on the one hand, blockchain problems arising from the form in which it is used by Bitcoin are not universal, and it can work differently for other currencies. On the other hand, right now the market is dominated by Bitcoin-like blockchains based on proof-of-work (POW).

Problem: Blockchain is slow and inefficient

Bitcoin’s throughput is seven transactions per second, not for each participant, but for the whole network. And for Ethereum, the second-best in terms of capitalization, it is 15 simple money transfers and 3–5 smart contracts per second.
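
For anyone wondering where the famous figure of roughly seven transactions per second comes from, a back-of-envelope calculation gets you there, assuming a 1 MB block size limit, an average transaction of around 250 bytes and one block roughly every ten minutes:

    # Back-of-envelope estimate of Bitcoin's throughput ceiling.
    # Assumptions: 1 MB blocks, ~250-byte average transaction, one block per ~600 s.
    block_size_bytes = 1_000_000
    avg_tx_bytes = 250
    block_interval_s = 600

    tx_per_block = block_size_bytes / avg_tx_bytes     # ~4,000 transactions per block
    tx_per_second = tx_per_block / block_interval_s    # for the whole network
    print(round(tx_per_second, 1))                     # 6.7, i.e. roughly seven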

The POW principle used by most currencies guarantees that electricity consumption and the amount of hardware will grow until mining becomes unprofitable. However, this growth in overhead costs never improves the quality of the service provided: it's always 7 transactions per second, no matter how many miners there are and how much electricity they burn.

The Lightning Network

Experts have long been concerned about the problem of insufficient transaction speed in the Bitcoin system, and to address it, they invented the Lightning Network.

This is how it works — or, how it will work, once it is launched: First, certain network participants who need a faster transaction rate set up a separate channel — consider it a kind of private chat room — and, as a guarantee of integrity, make a deposit in the main Bitcoin network. Then they start exchanging payments separately from the rest of the network — at any speed. When the channel is no longer needed, the participants record the results of the interaction in a public blockchain and, assuming no one violated the rules, receive their deposit back.
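
As a rough illustration of the channel idea, the sketch below models the off-chain bookkeeping only; it deliberately leaves out the signatures, time-locks and penalty transactions the real Lightning Network depends on, and the ToyChannel class and its numbers are invented for the example.

    # Toy payment channel: two parties lock deposits, update balances off-chain,
    # and only the final state is ever settled on the main chain.
    class ToyChannel:
        def __init__(self, deposit_a, deposit_b):
            self.balances = {"A": deposit_a, "B": deposit_b}
            self.updates = 0                  # off-chain updates, never broadcast

        def pay(self, sender, receiver, amount):
            if self.balances[sender] < amount:
                raise ValueError("insufficient channel balance")
            self.balances[sender] -= amount
            self.balances[receiver] += amount
            self.updates += 1                 # instant, no mining involved

        def close(self):
            # Only this final state would be written to the public blockchain.
            return dict(self.balances)

    channel = ToyChannel(deposit_a=5.0, deposit_b=5.0)
    for _ in range(1000):                     # a thousand micro-payments, off-chain
        channel.pay("A", "B", 0.001)
    print(channel.close())                    # roughly {'A': 4.0, 'B': 6.0}, settled once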

Optimistic predictions have the Lightning Network launching as early as this year, enabling millions of transactions per second. So much for “slow.”

Problem: Blockchain is bulky

Blockchain is bulky, but that stopped being a problem after some trust was built on the network. In fact, you don’t have to download and check everything to believe the likelihood of deception is very low.

Web wallets

First of all, existing Web wallets and Web services store everything and do all of the work for you. If no one complains about a certain service, it can very well be considered reliable and somewhat trusted.

It also comes with an important advantage compared with traditional payment systems. If one Web wallet closes, you can simply switch to another one, because they all rely on the same transaction record: there is only one blockchain. Compare that with what would happen if your regular bank encountered a glitch or went bankrupt and you needed to switch banks.

Thin wallets

Satoshi himself described another, more advanced (and more reliable) method back in 2008. Instead of storing and processing the entire 100GB blockchain, you can download and check just the block headers, as well as proof of correct transactions that are directly connected to you.

If many random network nodes that you are talking to report that the block headers are exactly the same, you can be rather confident that everything is correct.

At the moment, the headers of all existing blocks take up only 40MB, which isn't much. But you can save even more: you don't have to store the headers of every block that ever existed; you can start from a specific moment.
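
The thin-wallet trick works because block headers form a hash chain: each header commits to the hash of the one before it, so a light client can check consistency without ever downloading the transactions. Here is a minimal sketch of that check, with an invented three-block chain and a simplified header format (real Bitcoin headers are a fixed 80 bytes and must also satisfy a proof-of-work target):

    # Simplified header-chain check in the spirit of thin (SPV) wallets:
    # verify that every header commits to the hash of the one before it.
    import hashlib

    def header_hash(header):
        data = f"{header['prev_hash']}|{header['merkle_root']}|{header['nonce']}"
        return hashlib.sha256(data.encode()).hexdigest()

    def chain_is_consistent(headers):
        for prev, current in zip(headers, headers[1:]):
            if current["prev_hash"] != header_hash(prev):
                return False                  # someone has tampered with history
        return True

    # An invented miniature chain of three headers.
    genesis = {"prev_hash": "0" * 64, "merkle_root": "aaa", "nonce": 1}
    block_2 = {"prev_hash": header_hash(genesis), "merkle_root": "bbb", "nonce": 7}
    block_3 = {"prev_hash": header_hash(block_2), "merkle_root": "ccc", "nonce": 3}

    print(chain_is_consistent([genesis, block_2, block_3]))   # True
    block_2["merkle_root"] = "forged"                         # rewrite history...
    print(chain_is_consistent([genesis, block_2, block_3]))   # ...and the chain breaks: False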

Problem: Blockchain is not scalable

A system’s scalability refers to its ability to improve with the addition of resources. The classic blockchain is indeed completely unscalable; adding resources does not affect the speed of transactions at all.

It’s interesting that the classic blockchain is scalable neither up nor down: If you built a small system for solving local problems based on the same principles, it would be vulnerable to a so-called 51% Attack — anyone with enough computing power could come in, immediately take over, and be able to rewrite history.
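
To see why enough computing power really does mean being able to rewrite history, the original Bitcoin whitepaper gives the probability that an attacker controlling a fraction q of the hash power ever catches up from z blocks behind. A short sketch of that formula shows how the odds jump to certainty once q reaches 50%:

    # Nakamoto's catch-up probability from the Bitcoin whitepaper: an attacker
    # with share q of the hash power, starting z blocks behind, eventually
    # overtakes the honest chain with probability (q/p)^z, or 1 if q >= p.
    def catch_up_probability(q, z):
        p = 1.0 - q                   # honest share of the hash power
        return 1.0 if q >= p else (q / p) ** z

    for q in (0.1, 0.3, 0.45, 0.51):
        print(q, round(catch_up_probability(q, z=6), 6))
    # 0.1  -> ~0.000002  (hopeless)
    # 0.3  -> ~0.0062
    # 0.45 -> ~0.3
    # 0.51 -> 1.0        (a 51% attacker always wins eventually)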

Plasma

Joseph Poon (the inventor of the Lightning Network) and Vitalik Buterin (a cofounder of Ethereum) recently proposed a new solution. They call it Plasma.

Plasma is a framework for making a blockchain of blockchains. The concept is similar to that of the Lightning Network, but it was developed for Ethereum. Here is how it works: Someone makes a deposit in the main Ethereum network and starts talking to other clients independently and separately, supervising the execution of his or her smart contract and the general rules of Ethereum on their own. A smart contract is a mini-program for working with money and Web wallets. It is the key feature of Ethereum.

From time to time, the results of these individual communications are recorded in the main network. Also, as with the Lightning Network, all participants oversee the execution of the smart contract and complain if something is not right.

So far, the proposal is just a draft, but if the concept is successfully implemented, the problem of blockchain scalability will be a thing of the past.

Problem: Miners are burning up the planet’s resources

Proof-of-work is the most popular method of reaching consensus in cryptocurrencies. A new block is created after lengthy calculations performed solely to prevent rewriting of the financial history. POW network miners burn a lot of electricity, and the number of megawatts used is regulated not by safety concerns or common sense, but by economics: capacities expand as long as the current cryptocurrency exchange rate keeps mining profitable.

Proof-of-stake

An alternative approach to distributing the right to create blocks is called proof-of-stake (POS). Under this concept, the likelihood of creating a block, and thus the right to receive a reward (in the form of interest or newly emitted currency), depends not on how much computational work you have done (how much electricity you have burnt), but on how much currency you hold in the system.

If you own a third of all coinage, you have a one-third probability of creating a new block, thanks to a random algorithm. This principle is a good reason for participants to obey the rules, because the more of the currency you have, the more interested you are in a properly functioning network and a stable currency rate.
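
A toy sketch of that selection rule, with invented stakeholders and amounts rather than any real coin's implementation, shows how block-creation rights can be drawn in proportion to stake:

    # Toy proof-of-stake selection: the chance of creating the next block
    # is proportional to how much currency each participant holds.
    import random
    from collections import Counter

    stakes = {"Alice": 300, "Bob": 100, "Carol": 600}   # invented holdings

    def pick_block_creator(stakes):
        holders = list(stakes)
        weights = [stakes[h] for h in holders]
        return random.choices(holders, weights=weights, k=1)[0]

    # Over many rounds, each holder's share of blocks tracks their share of stake.
    wins = Counter(pick_block_creator(stakes) for _ in range(10_000))
    total = sum(stakes.values())
    for holder, stake in stakes.items():
        print(holder, f"stake {stake / total:.0%}", f"blocks {wins[holder] / 10_000:.0%}")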

Proof-of-authority

A more radical method exists as well: letting only trusted participants create blocks. For example, 10 hospitals can use a blockchain to keep track of an epidemiological situation in a city. Each hospital has its own signature key as proof of authority. That makes such a blockchain private: Only hospitals can write to it. At the same time, it helps maintain openness, an important quality of the blockchain.

However, proof-of-authority is detrimental to the original blockchain concept: The network effectively becomes centralized.

Resources can be used for good

Some networks do useful work within the proof-of-work concept. They look for prime numbers of a certain type (Primecoin), calculate protein structures (FoldingCoin), or perform other scientific tasks that require a lot of calculations (GridCoin). The reward for “mining” promotes investing more resources in science.

Problem: Blockchain is decentralized and therefore is not developing

It is not very easy to introduce changes into a decentralized network protocol. The developer can either run mandatory updates for all clients — although that kind of network cannot be considered truly decentralized — or persuade all participants to accept the changes. If a significant proportion of them vote against the changes, the community may split: The blockchain will split into two alternative blockchains, and there will be two currencies. That split is called a fork.

Part of the problem is that different participants have different interests. Miners are interested in growing rewards and interest; users want to pay less for transfers; fans want the cryptocurrency to become more popular; and geeks want useful innovations to be added to the technologies.

Two of the largest cryptocurrencies have already split. It happened with Bitcoin not too long ago, when participants were unable to agree on a strategy for expanding block size. A little earlier, something similar happened with Ethereum, the result of a disagreement about whether it was fair to reverse a hack of an investment fund and return the money to investors.

How can such situations be avoided?

Tezos

It is possible to encode into a cryptocurrency the ability to vote on modifications. That’s precisely what the cryptocurrency Tezos, which is about to go on the market, did. Primary voting characteristics are as follows:

  1. The more cryptocurrency you hold, the more voting power you have. Mining power is irrelevant.
  2. A vote may be delegated to someone who understands the subject of the current vote better than you do.
  3. Developers are entitled to a veto for one year after launch, and if necessary veto power can be extended.
  4. The initial quorum will be 80%, but that can be changed to conform to actual user activity.

It's thought this approach will significantly lower the emotional temperature of such debates and reduce the need for hard forks.

When voting under these principles, at some point the majority could well vote to eliminate the minority's voting rights. In short, the rich may take over. However, Tezos's developers think that such a takeover would have a negative impact on the value of the currency and is therefore unlikely. We'll see.

Problem: Blockchain is too transparent

Imagine you're WikiLeaks and you receive donations in bitcoins. Everyone knows your address and how much you have, and when you try to convert your money into dollars at an exchange, law enforcement will know how much you have in dollars.

You can’t launder your money in Bitcoin. Dividing up the money into 10 wallets only means having 10 accounts associated with you. There are services called mixers or tumblers that move around large sums of money for a fee, to obscure the real owner, but they are inconvenient for a number of reasons.

CoinJoin in Dash

The creators of the cryptocurrency Dash (the former Darkcoin) were the first to try solving the anonymity problem, by using the PrivateSend function. Their approach was simple: They designed a tumbler right into the currency.
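
A mixing round is easy to sketch: several participants put equal-sized coins into one joint transaction and get them back at fresh addresses, so an outside observer cannot tell which output belongs to which input. The toy round below uses invented names and amounts; the real PrivateSend protocol layers on fixed denominations, repeated mixing rounds and network coordination that this sketch leaves out.

    # Toy mixing round: equal-value inputs go in, shuffled outputs come out,
    # so an observer can no longer link a specific input to a specific output.
    import random

    def mix_round(participants, denomination):
        # Each participant contributes one coin of the same denomination
        # and names a fresh, unused address where they want it back.
        inputs = [(old_addr, denomination) for old_addr in participants]
        outputs = [(fresh_addr, denomination) for fresh_addr in participants.values()]
        random.shuffle(outputs)               # break the positional linkage
        return {"inputs": inputs, "outputs": outputs}

    # Invented participants: old address -> fresh address.
    participants = {"old_A": "fresh_1", "old_B": "fresh_2", "old_C": "fresh_3"}
    tx = mix_round(participants, denomination=0.1)
    print(tx["inputs"])    # three identical 0.1 inputs
    print(tx["outputs"])   # the same three amounts, shuffled to fresh addresses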

There were a few problems. First, if someone (e.g., law enforcement) controls a significant number of the nodes that mix “clean” money with “dirty,” they can observe the transfer. Perhaps an unlikely scenario, but still quite possible.

Second, mixing dirty money with clean makes all of that money look a bit dirty — or “gray.” But for gray money to appear clean, all participants have to use mixing all the time.

CryptoNote in Monero

A more reliable approach was invented: a truly anonymous currency called Monero.

First, Monero uses ring signatures, which permit a member of a group to sign a message on behalf of the group while preventing anyone from ascertaining which member actually signed it. This lets the sender hide their own traces. At the same time, the protocol prevents double spending.

Second, Monero uses not only a private key for transferring money, but also an additional private key for seeing what has arrived in your wallet, making it impossible for outsiders to see someone else's transaction history.

Third, some senders may want to generate one-time wallets to keep money that is private and funds coming in from the markets separate. (This recommendation was made long ago over at Bitcoin.)

Conclusion

Our short overview of issues that some talented people have turned to their benefit has come to a close. We could have written much more about smart contracts on Ethereum, the bright future of Ripple, or cryptocurrencies without a blockchain, such as IOTA.

Strictly speaking, the title of this article is inaccurate. We discussed blockchain's add-ons, not blockchain itself. But that's the beauty of blockchain: it inspires people to look for ways to improve it.

 

Source: Kaspersky

 

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

It's Our Birthday!

12 years ago, on August 1st 2006, Hanson Regan was born. Brothers John and Gabriel Kelly created the company whose birthday we celebrate today.

(John & Gabriel 2006)

Over the years Hanson Regan has grown steadily and organically, establishing an
exceptional reputation in the market-place.

"The twelve years have flown by, it became a joint vision really, Gabriel had his and I mine, coming to the same point, in some ways influenced by forces outside us. We didn’t know it but we had to wait until the right conditions presented themselves." John Kelly remarked. 

From the very beginning John and Gabriel set our four core values of Grow, Honest, Professional and Relationships, which are as relevant to us today as they were 12 years ago.

Our people are at the heart of Hanson Regan. We are powerful listeners, and by combining our clients' vision with our passion for rapid resourcing we are able to make what our clients do really successful. Dedicated account management, full support suites, candidate referencing and rapid, precise resourcing are just a few of the ways in which our team at Hanson Regan provides excellence.

"We'd always hoped to be able to help large, multinational clients source perfect contractors enabling them to run their businesses effectively. Now we are, thanks to the great people we have at Hanson Regan, and the processes we adopt to provide the best candidates. Like the clients we support, we are continuously improving, growing stronger and building some super honest partnerships. Here's to the next 12." - John Kelly

(Hanson Regan Team 2018)

Today we are celebrating all the people in our company who make us great! So thank you to each and every one of you, keep up the fantastic work and enjoy the celebratory breakfast. You all earned it!!

If you're looking to make a career change, or to wish us a Happy Birthday, you can call us on +44 0208 290 4656 or drop us an email info@hansonregan.com

 

Most of AI’s Business Uses Will Be in Two Areas

While overall adoption of artificial intelligence remains low among businesses (about 20% as of our last study), senior executives know that AI isn't just hype. Organizations across sectors are looking closely at the technology to see what it can do for their business. As they should: we estimate that 40% of all the potential value that can be created by analytics today comes from the AI techniques that fall under the umbrella of "deep learning" (which utilizes multiple layers of artificial neural networks, so called because their structure and function are loosely inspired by those of the human brain). In total, we estimate deep learning could account for between $3.5 trillion and $5.8 trillion in annual value.

However, many business leaders are still not exactly sure where they should apply AI to reap the biggest rewards. After all, embedding AI across the business requires significant investment in talent and upgrades to the tech stack as well as sweeping change initiatives to ensure AI drives meaningful value, whether it be through powering better decision-making or enhancing consumer-facing applications.

Through an in-depth examination of more than 400 actual AI use cases across 19 industries and nine business functions, we’ve discovered an old adage proves most useful in answering the question of where to put AI to work, and that is: “Follow the money.”

The business areas that traditionally provide the most value to companies tend to be the areas where AI can have the biggest impact. In retail organizations, for example, marketing and sales has often provided significant value. Our research shows that using AI on customer data to personalize promotions can lead to a 1-2% increase in incremental sales for brick-and-mortar retailers alone. In advanced manufacturing, by contrast, operations often drive the most value. Here, AI can enable forecasting based on underlying causal drivers of demand rather than prior outcomes, improving forecasting accuracy by 10-20%. This translates into a potential 5% reduction in inventory costs and revenue increases of 2-3%.

While applications of AI cover a full range of functional areas, it is in fact in these two cross-cutting ones—supply-chain management/manufacturing and marketing and sales—where we believe AI can have the biggest impact, at least for now, in several industries. Combined, we estimate that these use cases make up more than two-thirds of the entire AI opportunity. AI can create $1.4-$2.6 trillion of value in marketing and sales across the world’s businesses and $1.2-$2 trillion in supply chain management and manufacturing (some of the value accrues to companies while some is captured by customers). In manufacturing, the greatest value from AI can be created by using it for predictive maintenance (about $0.5-$0.7 trillion across the world’s businesses). AI’s ability to process massive amounts of data including audio and video means it can quickly identify anomalies to prevent breakdowns, whether that be an odd sound in an aircraft engine or a malfunction on an assembly line detected by a sensor.

Another way business leaders can home in on where to apply AI is to simply look at the functions that are already taking advantage of traditional analytics techniques. We found that the greatest potential for AI to create value is in use cases where neural network techniques could either provide higher performance than established analytical techniques or generate additional insights and applications. This is true for 69% of the AI use cases identified in our study. In only 16% of use cases did we find a “greenfield” AI solution that was applicable where other analytics methods would not be effective. (While the number of use cases for deep learning will likely increase rapidly as algorithms become more versatile and the type and volume of data needed to make them viable become more available, the percentage of greenfield deep learning use cases might not increase significantly because more established machine learning techniques also have room to become better and more ubiquitous.)

We don't want to come across as naïve cheerleaders. Even as we see economic potential in the use of AI techniques, we recognize the tangible obstacles and limitations to implementing AI. Obtaining data sets that are large and comprehensive enough to feed the voracious appetite deep learning has for training data is a major challenge. So, too, is addressing the mounting concerns around the use of such data, including security, privacy, and the potential for passing human biases on to AI algorithms. In some sectors, such as health care and insurance, companies must also find ways to make the results explainable to regulators in human terms: why did the machine come up with this answer? The good news is that the technologies themselves are advancing and starting to address some of these limitations.

Beyond these limitations, there are the arguably more difficult organizational challenges companies face as they adopt AI. Mastering the technology requires new levels of expertise, and process can become a major impediment to successful adoption. Companies will have to develop robust data maintenance and governance processes, and focus on both the “first mile”—how to acquire data and organize data efforts—and the far more difficult “last mile,” how to integrate the output of AI models into work flows, ranging from those of clinical trial managers and sales force managers to procurement officers.

While businesses must remain vigilant and responsible as they deploy AI, the scale and beneficial impact of the technology on businesses, consumers, and society make pursuing AI opportunities worth a thorough investigation. The pursuit isn’t a simple prospect but it can be initiated by evoking a simple concept: follow the money.

 

Source: HBR

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

What’s next for Tech innovation in 2018?

One way that Samsung Electronics works with the technology startup community is through Samsung NEXT – an innovation arm that scouts, supports and invests in forward-thinking new software and services businesses and entrepreneurs. By rubbing shoulders with those on the frontline of software innovation, as well as harnessing the insights of its homegrown experts, Samsung is always thinking about how technology, and indeed society, will change. We spoke with members of the Samsung NEXT team—here are the top five technologies that will change people’s lifestyle in 2018.

1. Faster, more transparent machine learning

Artificial intelligence (AI) will dramatically expand within the next 12 months. It is already changing the way people interact with a number of applications, platforms and services across both consumer and enterprise environments.

 

In the next couple of years, there will be new approaches on two fronts. Firstly, less data will be required to train an algorithm. This means an image recognition system that currently needs 100,000 images to learn how to operate will only need a small fraction of that number. This will make it easier to quickly implement powerful machine learning systems.

 

Secondly, the technology will become more transparent. Advances will mean researchers are able to open the black box of AI and explain more clearly why a particular model made the decision it did. Currently, a lot of academic groups and start-ups are putting much effort into understanding how a machine makes decisions, how the models learn from the data and which parameters of the data influence the models.

Scott Phoenix, the CEO of Vicarious, makes a presentation about human-level intelligent robots at the Samsung CEO Summit last October in San Francisco. (source: www.vicarious.com)

 

Samsung plans to build an AI platform under a common architecture that will provide the deepest understanding of usage context and behaviors. This is one of the core strategies for creating a user-centric AI ecosystem. Samsung NEXT has also invested in various companies innovating in the field, including Vicarious, a company developing neuroscience-based artificial general intelligence (AGI) for robots for simpler deployment with faster training; Bonsai, which develops an AI platform that empowers enterprises to create, deploy and manage AI models; and FloydHub, a start-up that has developed a cloud service for machine learning.

2. New AR and VR form factors and viewing models

Both augmented reality (AR) and virtual reality (VR) are increasingly being relied upon to create more immersive worlds where technology enables users to get more hands-on with virtual overlays and environments. In the case of AR, devices won’t remove us from our world, but will rather enable us to have objects appear as if they were really there.

 

2018 will witness more developers embracing AR, starting to make interesting applications moving beyond the world of gaming. One such example is a furniture company planning to make its full catalogue available in AR. Samsung NEXT has invested in companies like 8i, which provides a platform that enables true 3D (fully volumetric) video capture of people, allowing viewers to walk around as real humans in VR and AR.

 

8i’s Holo augmented reality application enables digital recreation of people and characters to be seen in the real world through a smartphone camera. (source: www.8i.com)

 

Head Mounted Displays (HMDs) will see foundational technology improvements in the quality of their displays, sensors, and materials. In 2018, there will be a lot of excitement in the industry in the form of M&A and investment activities. “For VR, we will see more standalone devices, falling between existing HMDs powered by mobile phones, and high-end hardware connected to powerful PCs. This will enable more people to experience the technology in new ways,” said Ajay Singh, Samsung NEXT Ventures.

3. Blockchain to look beyond cryptocurrencies

“In 2017, we saw blockchain technology increasingly applied to develop unbanked countries and communities,” said Raymond Liao of Samsung NEXT Ventures. “With underpinnings in peer-to-peer transactions, blockchain has the power to democratize transactions by removing the middleman and reducing the needless fees that so frequently hamstring those deprived of banking services.”

 

Cryptocurrency has been the dominant killer application for blockchain up to now. However, we will see blockchain entrepreneurs and decentralization idealists, freshly financed by token sales, marching to either empower consumers against the one-sided data-monetization paradigm or break up enterprise data silos in, for example, the supply chain and healthcare industries.

Samsung's focus on security will be an advantage for the company as far as blockchain is concerned. The elephant in the room around blockchain is that the entire technology is only as secure as the users' keys. Samsung's technology enables enterprise customers to be assured of a certain level of security in how their employees interact with their blockchain-based apps. Furthermore, Samsung NEXT's portfolio includes companies like HYPR, which provides enterprises with enhanced security and user experience using blockchain, and Filament, which secures Internet of Things (IoT) devices with its blockchain protocol.

4. IoT to put power in the hands of healthcare patients

Healthcare is an industry that is ripe for disruption. We will begin to see the power of IoT in healthcare with the emergence of inexpensive, continuous ways to capture and share our data, as well as derive insights that inform and empower patients. Moreover, wearable adoption will create a massive stream of real-time health data beyond the doctor’s office, which will significantly improve diagnosis, compliance and treatment. In short, a person’s trip to the doctor will start to look different – but for the right reasons.

 

Samsung is using IoT and AI to improve efficiency in healthcare. Samsung NEXT has invested in startups in this area, such as Glooko, which helps people with diabetes by uploading a patient's glucose data to the cloud to make it easier to access and analyse. Another noteworthy investment in this space from Samsung NEXT is HealthifyMe, an Indian company whose mobile app connects AI-enabled human coaches with people seeking diet and exercise advice.

Samsung is uniquely positioned among tech companies in that it already has a significant business in healthcare. The company has solutions in wearables, hospital screens and tablets, and X-ray and MRI machines. By tying all these solutions together and cooperating with other partners, it will enable patients to manage their health from their own devices.

5. IoT breaks free from homes and enters the city

In the next couple of years, one should expect to see IoT transform urban environments thanks to the combination of learnings from smart homes and buildings, and the proliferation of 5G. Transformation will happen in waves, starting with innovation that requires fewer regulations. It is expected to impact the daily life of the community in meaningful ways, such as parking solutions, mapping, and bike share schemes.

 

Samsung NEXT already has various IoT investments including Stae for data-driven urban planning, and Swiftly that provides enterprise software to help transit agencies and cities improve urban mobility.

 

The company has its own IoT platform SmartThings—an acquisition that came through the Samsung NEXT team. The platform is connected to ARTIK for enterprises and HARMAN Ignite’s connected car platform, creating a comprehensive IoT ecosystem. Based on its progress on IoT, Samsung showcased its vision for ‘Samsung City 2020’ at this year’s CES, which is on its way to realization.

Source: Samsung

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

The Impact on Jobs and Training from AI, AR and VR

Artificial intelligence, augmented reality and virtual reality are here to stay, but what impact will they have on jobs and training?

A new study by the Pew Research Center and Elon University's Imagining the Internet Center asked more than 1,400 technologists, futurists and scholars whether well-prepared workers will be able to keep up in the race with artificial intelligence tools, and what impact this development will have on market capitalism.

According to Elon University, most of the experts said they hope to see education and jobs-training ecosystems shift in the next decade to exploit liberal arts-based critical-thinking-driven curriculums; online courses and training amped up by artificial intelligence, augmented reality and virtual reality; and scaled-up apprenticeships and job mentoring.

However, some expressed fears that education will not meet new challenges or — even if it does — businesses will implement algorithm-driven solutions to replace people in many millions of jobs, leading to a widening of economic divides and capitalism undermining itself.

An analysis of the overall responses uncovered five key themes:

  1. The training ecosystem will evolve, with a mix of innovation in all education formats. For instance, more learning systems will migrate online and workers will be expected to learn continuously. Online courses will get a big boost from advances in augmented reality, virtual reality and artificial intelligence.

  2. Learners must cultivate 21st century skills, capabilities and attributes such as adaptability and critical thinking.

  3. New credentialing systems will arise as self-directed learning expands.

  4. Training and learning systems will not be up to the task of adapting to train or retrain people for the skills that will be most prized in the future.

  5. Technological forces will fundamentally change work and the economic landscape, with millions more people and millions fewer jobs in the future, raising questions about the future of capitalism.

“The vast majority of these experts wrestled with a foundational question: What is special about human beings that cannot be overtaken by robots and artificial intelligence?” said Lee Rainie, director of internet, science and technology research at Pew Research Center and co-author of the report. “They were focused on things like creativity, social and emotional intelligence, critical thinking, teamwork and the special attributes tied to leadership. Many made the case that the best educational programmes of the future will teach people how to be lifelong learners, on the assumption that no job requirements today are fixed and stable.”

Among the skills, capabilities and attributes the experts predicted will be of most future value were: adaptability, resilience, empathy, compassion, judgement and discernment, deliberation, conflict resolution, and the capacity to motivate, mobilise and innovate.

Jeff Jarvis, a professor at the City University of New York Graduate School of Journalism, highlighted the need for schools to take a new approach to educate the workforce of the future: “Schools today turn out widget makers who can make widgets all the same. They are built on producing single right answers rather than creative solutions. They are built on an outmoded attention economy: Pay us for 45 hours of your attention and we will certify your knowledge. I believe that many — not all — areas of instruction should shift to competency-based education in which the outcomes needed are made clear and students are given multiple paths to achieve those outcomes, and they are certified not based on tests and grades but instead on portfolios of their work demonstrating their knowledge.”

Tiffany Shlain, filmmaker and founder of the Webby Awards, added: “The skills needed to succeed in today’s world and the future are curiosity, creativity, taking initiative, multi-disciplinary thinking and empathy. These skills, interestingly, are the skills specific to human beings that machines and robots cannot do, and you can be taught to strengthen these skills through education.”


Source: Smartcities

 

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Built Robotics Self Driving Bulldozer

This is a self-driving bulldozer. It was created by Built Robotics, and it can dig holes by itself based on the specific location coordinates you send from the app.


Built Robotics was founded with a question — What will construction look like in a generation? And what solutions can we develop to address a chronic labor shortage, productivity that has fallen by half since the 1960s, and an industry that, despite significant improvements, remains the most dangerous in America? These are tough questions, and it’s impossible to know the answers today. But we kept coming back to one realization: we need a new way to build.

"With that mission in mind, we came up with a simple idea. Let’s take the latest sensors from self-driving cars, retrofit them into proven equipment from the job site, and develop a suite of autonomous software designed specifically for the requirements of construction and earthmoving. And over the last two years, with a team of talented engineers, roboticists, and construction experts, that’s what we’ve done. It hasn’t been easy—in fact, no one has ever done what we’re doing—but with over $100 billion in earthmoving and grading services performed in the US each year, it feels like we’re onto something."

Source: BuiltRobotics

If you’re interested in a career in Artificial Intelligence or Robotics call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

In an AI-powered world, what are potential jobs of the future?

With virtual assistants answering our emails and robots replacing humans on manufacturing assembly lines, mass unemployment due to widespread automation seems imminent. But it is easy to forget amid our growing unease that these systems are not “all-knowing” and fully competent.

As many of us have observed in our interactions with artificial intelligence, these systems perform repetitive, narrowly defined tasks very well but are quickly stymied when asked to go off script — often to great comical effect. As technological advances eliminate historic roles, previously unimaginable jobs will arise in the new economic reality. We combine these two ideas to map out potential new jobs that may arise in the highly automated economy of 2030.

Training, supervising and assisting robots

As robots take on increasingly complex functions, more humans will be needed to teach robots how to correctly accomplish these jobs. Human Intelligence Task (HIT) marketplaces like MTurk and Crowdflower already use humans to train AI to recognize objects in images or videos. New AI companies, like Lola, a personal travel service, are expanding HIT with specialized workers to train AI for complex tasks. 

Microsoft's Tay bot, which quickly devolved into tweeting offensive and obscene comments after interacting with users on the internet, caused significant embarrassment to its creators. Given how quickly Tay went off the rails, it is easy to imagine how dangerous a bot trusted with maintaining our physical safety could become if it is fed the wrong sets of information or learns the wrong things from a poorly designed training set. Because the real world is ever-changing, AI must continuously train and improve, even after it achieves workable domain expertise, which means that expert human supervision remains critical.

Integrating jobs for people into the design of semi-autonomous systems has enabled some companies to achieve greater performance despite current technological limitations.

BestMile, a driverless vehicle deployed to transport luggage at airports, has successfully integrated human supervision into its design. Instead of engineering for every edge case in the complex and dangerous environment of an airport tarmac, the BestMile vehicle stops when it senses an obstacle in its path and waits for its human controller to decide what to do. This has allowed the company to enter the market much more quickly than competitors, which must refine their sensing algorithms until their robots can operate independently without incident.
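To make the stop-and-defer pattern concrete, here is a minimal, hypothetical sketch of a human-in-the-loop control loop. It is not BestMile's software; the sensing and operator functions are placeholder stubs standing in for a real perception stack and a real remote-operator console.

```python
import random
import time

def obstacle_detected() -> bool:
    """Placeholder for the vehicle's sensing stack (e.g. lidar/camera fusion)."""
    return random.random() < 0.2  # simulate an occasional obstacle

def ask_operator(event: str) -> str:
    """Placeholder for a remote human controller; returns a chosen action."""
    print(f"[operator] reviewing event: {event}")
    return "proceed_slowly"  # could also be 'reroute', 'wait', etc.

def drive_step() -> None:
    print("driving autonomously...")

def control_loop(steps: int = 10) -> None:
    """Human-in-the-loop control: stop on anything ambiguous and defer to a person."""
    for _ in range(steps):
        if obstacle_detected():
            print("obstacle detected -> stopping and escalating to human controller")
            action = ask_operator("obstacle in path")
            print(f"executing operator decision: {action}")
        else:
            drive_step()
        time.sleep(0.1)

if __name__ == "__main__":
    control_loop()
```

The design choice is the same one described above: the autonomy handles the routine case, and every uncertain case becomes a cheap human decision rather than an engineering project.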

Frontier explorers: Outward and upward

When Mars One, a Dutch startup whose goal is to send people to Mars, called for four volunteers to man their first Mars mission, more than 200,000 people applied.

Regardless of whether automation leads to increased poverty, automation’s threat of displacing people from their current jobs and in essence some part of their sense of self-worth could drive many to turn to an exploration of our final frontiers. An old saying jokes that there are more astronauts from Ohio than any other state because there is something about the state that makes people want to leave this planet.

One risk to human involvement in exploration is that exploration itself is already being automated. Recently, relatively few of our space exploration missions have been manned. Humans have never travelled beyond the Moon; all our exploration of other planets and the outer solar system has been through unmanned probes.

Artificial personality designers

As AI creeps into our world, we’ll start building more intimate relationships with it, and the technology will need to get to know us better, but some AI personalities may not suit some people. Moreover, different brands may want to be represented by distinct and well-defined personalities. The effective human-facing AI designer will, therefore, need to be mindful of subtle differences within AI to make AI interactions enjoyable and productive. This is where the Personality Designer or Personality Scientist comes in.

While Siri can tell a joke or two, humans crave more, so we will have to train our devices to provide for our emotional needs. In order to create a stellar user experience, AI personality designers or scientists are essential — to research and to build meaningful frameworks with which to design AI personalities. These people will be responsible for studying and preserving brand and culture, then injecting that information meaningfully into the things we love, like our cars, media, and electronics.

Chatbot builders are also hiring writers to write lines of dialogue and scripts to inject personality into their bots. Cortana, Microsoft’s chatbot, employs an editorial team of 22. Creative agencies specializing in writing these scripts have also found success in the last year.

Startups like Affectiva and Beyond Verbal are building technology that assists in recognizing and analyzing emotions, enabling AI to react and adjust its interactions with us to make the experience more enjoyable or efficient. A team from the Massachusetts Institute of Technology and Boston University is teaching robots to read human brain signals to determine when they have made an error, without requiring active human correction and monitoring. Google has also recently filed patents for robot personalities and has designed a system to store and distribute personalities to robots.

Human-as-a-Service

As automated systems become better at doing most jobs humans perform today, the jobs that remain monopolized by humans will be defined by one important characteristic: the fact that a human is doing them. Of these jobs, social interaction is one area where humans may continue to desire specifically the intangible, instinctive difference that only interactions and friendships with other real humans provide.

We are already seeing profound shifts toward “human-centric” jobs in markets that have experienced significant automation. A recent Deloitte analysis of the British workforce over the last two decades found massive growth in “caring” jobs: the number of nursing assistants increased by 909% and care workers by 168%.

The positive health effects of touch have been well documented and may provide valuable psychological boosts to users, patients, or clients. In San Francisco, companies are even offering professional cuddling services. Whereas today such services are stigmatized, “affection as a service” may one day be viewed on par with cognitive behavioral therapy or other treatments for mental health.

Likewise, friendship is a role that automated systems will not be able to fully fulfil. Certain activities that are generally combined with some level of social interaction, like eating a meal, are already seeing a trend towards “paid friends.” Thousands of Internet viewers are already paying to watch mukbang, or live video streams of people eating meals, a practice which originated in Korea to remedy the feeling of living alone. In the future, it is possible to imagine people whose entire job is to eat meals and engage in polite conversation with clients.

More practical social jobs in an automated economy may include professional networkers. Just as people have not trusted online services fully, it is likely that people will not trust more advanced matching algorithms and may defer to professional human networkers who can properly arrange introductions to the right people to help us reach our goals. Despite the proliferation of startup investing platforms, for example, we continue to see startups and VC firms engage placement agents in order to successfully fundraise.

Despite many claims to the contrary, designing a fully autonomous system is incredibly complex and remains far out of reach. For now, training a human is still much cheaper than developing a robot replacement.

 

Source: Readwrite

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Blue Prism World London 2018


At Hanson Regan we champion innovation, and so we're always on the lookout for exciting new ways to maximise efficiency and output. We were therefore delighted to attend this year's most impactful Robotic Process Automation (RPA) event: Blue Prism World London 2018.


 

It's no secret that RPA is big news for companies looking to automate time-consuming processes. For those unfamiliar with the term, RPA is a burgeoning technology that lets software robots replicate the actions of human workers for routine tasks such as data entry, altering the way organizations handle many of their key business and IT processes.

 

When RPA is used in conjunction with cognitive technologies, its capabilities can be expanded even further, extending automation to processes that would otherwise require judgement or perception. Thanks to natural language processing, ever more sophisticated chatbot technology and speech recognition, bots can now extract and structure information from speech audio, text or images before passing it to the next step of the process.
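As a rough illustration of "extract, structure, then hand off", the sketch below pulls a few fields out of an unstructured email and appends them to a CSV that a downstream step could pick up. The email text, field names and regular expressions are invented for the example; a production RPA tool would use its own connectors and far more robust language processing.

```python
import csv
import re

RAW_EMAIL = """
Order request from Jane Smith <jane@example.com>:
Please ship 3 units of part ABC-123 to 10 Downing St, London by 2018-07-01.
"""

def extract_order(text: str) -> dict:
    """Very rough illustration of turning free text into structured fields."""
    return {
        "customer_email": re.search(r"<([^>]+)>", text).group(1),
        "quantity": int(re.search(r"(\d+)\s+units", text).group(1)),
        "part_number": re.search(r"part\s+([A-Z0-9-]+)", text).group(1),
        "due_date": re.search(r"by\s+(\d{4}-\d{2}-\d{2})", text).group(1),
    }

def hand_off(record: dict, path: str = "orders.csv") -> None:
    """Stand-in for the next step of the process (e.g. an ERP entry)."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=record.keys())
        if f.tell() == 0:
            writer.writeheader()
        writer.writerow(record)

if __name__ == "__main__":
    order = extract_order(RAW_EMAIL)
    print(order)
    hand_off(order)
```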

 

Industry Leading Automation

 

Blue Prism World offered attendees an opportunity to learn from and share with RPA industry leaders, practitioners, analysts and experts about real-world benefits and applications of RPA and Intelligent Automation.

 

With visionary keynote speakers such as Lynda Gratton, Professor of Management Practice at London Business School, Dario Zancetta, Head of Digital Technology and Innovation at Bank Of Ireland, and Vincent Vloemans from Global IT at Heineken, Blue Prism World presented a wealth of knowledge and a fascinating insight into the growing world of RPA and its potential to transform the way companies work.

 

The event was a fantastic opportunity to interact and network with people who are fascinated by the future of digitalization, and what that means for the future of work. Listening to Robert Kesterton, the Senior Manager of Business Improvement at Jaguar Land Rover, we heard how utilising robots helped his company to deliver a better outcome, not only in terms of cost, but also in functionality. The use of robots saved Jaguar Land Rover over 3,000 hours' worth of work and £0.5m of investment they didn't need to spend in their enterprise system, while delivering over £1.5m of revenue generation for the organisation in the process – a fantastic example of RPA technology delivering transformative innovation and efficiency.

 

We were particularly struck by the diversity of industries in attendance. From education and learning to finance and banking, all of the organisations present were doing different things, but they were all using robots to do them, highlighting the versatility of the technology on offer.

 

The Future Of Work

 

In her opening keynote presentation, Lynda Gratton explored the way that jobs overall are changing with the development of ever more sophisticated and employable technology. She suggested that this technology is at the heart of the future of the world of work, but that there is uncertainty as to what this will mean. Some argue that RPA will inevitably result in mass unemployment, while others envision a more positive future full of job creation and possibilities. The truth, according to Lynda, is that there will be as many jobs created as there will be destroyed.

 

At the same time, the jobs created won’t be the same as those that have come before them. Every single person, client and company you are advising will see their jobs transformed. And, in order to facilitate this transformation successfully, you have to retrain and re-skill your workforce and fundamentally change the context of work to encourage them to do that. Lynda reminded attendees that while automation frees people up to be more productive, it also frees them up to be more themselves.

 

Leading on from this, Lynda advocated for the promotion of women in work. Looking out across the audience, she highlighted how few women were in attendance, a shortfall that is reflective of the industry but one that must change. She urged businesses to do all they can to encourage young women to take up the exciting and future-proof roles on offer.

 

Lifelong Learning

 

While we’re all focused on the parts of jobs that we are taking away from our employees, it’s vital to be just as invested in the parts that will take their place; otherwise our workforces can become anxious about the aspects of their role they are losing. Taking work off people to allow them to do human work that requires them to be empathetic and creative can paradoxically make them worried and therefore less able to make empathetic choices.

 

In order to allay this anxiety, we must be clear that lifelong learning is to be at the heart of everything we do. By replacing what you’re taking away from your employees with learning, you help them grow professionally and, crucially, help them to fulfil their potential as humans.

Implementation

 

Our time at Blue Prism World further highlighted that efficient and accurate implementation of RPA technology is the key to its success. Here at Hanson Regan, we utilise RPA in our vetting systems, ensuring our candidates are up to scratch from the very start. By providing us only with proven candidates who can get the job done, it reduces the chance of hiccups and, therefore, saves us unnecessary spending.

 

RPA: Replacing Humans?

 

While RPA presents fantastic opportunities for organisations across the board, due to common misconceptions and misuse of terms like 'Artificial Intelligence', there is still widespread wariness of utilising robots.

 

In her closing keynote presentation, Leslie Willcock, Professor of Technology, Work and Globalisation, highlighted that organisations often under-perform, under-fund, under-resource and, crucially, under-aspire with their robotic process automation and cognitive automation objectives. This can be due to a number of factors, but it usually happens when RPA myths are perpetuated and misconceptions are accepted as truth: companies become wary and distrustful of the technology on offer.

 

Common RPA myths include:

 

  • RPA is only used to replace humans with technology, leading to layoffs
  • Business operations staff feel threatened by RPA
  • RPA replaces the IT department
  • RPA is driven only by cost savings
  • All RPA supplier tools scale easily and are enterprise-friendly
  • It's all about the technology and the software
  • RPA is being replaced by Cognitive Automation and AI

 

Dispel these myths, however, and Cognitive RPA has the potential to go beyond basic automation to provide business outcomes such as enhanced customer satisfaction, lower churn, and increased revenues.

 

Look out for future blog posts as we delve deeper into our time at Blue Prism World, and what RPA can mean for your business.

If you’re interested in a career in Artificial Intelligence or Robotics call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Detroit: Become Human a window into the future?


DETROIT: Become Human is the latest high profile exclusive to come to the PS4, which portrays a futuristic world where human-like robots walk among us.

Quantic Dream’s new piece of interactive entertainment, like Heavy Rain and Beyond before it, is full of gut-wrenching decisions and multiple branching paths.

Set in the near future of 2038, it explores a world where human-like androids live among us and what it means to be human.

 

But the story, which evokes the civil rights movements in the androids’ struggle, may be closer to the reality we face in a few decades time than you’d think.

Dr David Hanson, creator of the world’s most advanced android Sophia, believes that by 2045 robots will share the same civil rights as humans.

The robotics expert made the comments in a brand new research paper titled 'Entering The Age of Living Intelligent Systems and Android Society'.

Dr Hanson believes that by 2029 android AI will match the intelligence of a one-year old human.

This will open the door for androids to assume menial positions in the military and emergency services just two years later in 2031.

And he feels by 2035 “androids will surpass nearly everything that humans can do”.

Dr Hanson expects a new generation of androids will be able to pass university exams, earn PhDs and function with the intelligence level of an 18-year-old human.

He believes that these advanced machines could even go on to start a ‘Global Robotic Civil Rights Movement’.

The movement itself is expected to happen in 2038 and will be used to question the ethical treatment of AI machines within human society.

Dr Hanson’s research paper was commissioned alongside the release of Detroit: Become Human on PS4.

He said: “As depicted in Detroit: Become Human, lawmakers and corporations in the near future will attempt legal and ethical suppression of machine emotional maturity, so that people can feel safe.”

“Meanwhile artificial intelligence won't hold still. As people's demands for more generally intelligent machines push the complexity of AI forward, there will come a tipping point where robots will awaken and insist on their rights to exist, to live free.”

While Adam Williams, lead writer of Detroit: Become Human, added: “Detroit: Become Human is a work of fiction but Dr. Hanson’s research shows that life may soon imitate art.”

“His predictions are alarmingly close to the world depicted in the game. As the technology evolves, civil rights should be a natural consideration as androids become more prevalent in our society. I for one cannot wait to see how it plays out.”

Source: Express

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

SAP brings blockchain to the mainstream with their new supply chain initiative


Blockchain is one of the most recent technological buzzwords; following terms such as AI, machine learning and IoT.

At this stage, almost every business is intrigued by the concept. Some are even beginning to look for the means to begin their own adoption process.

IoT is now present in almost every newly made security camera on the planet. Machine learning is being used to stand in for customer service agents in the form of chatbots. AI is even being used by cybersecurity companies in the form of heuristic virus detection.


Blockchain, however, is the new kid on the block. The real-life use cases for the technology have predominantly been linked to Bitcoin – a decentralised digital currency first started in 2009.

While business leaders have undoubtedly seen the extolled virtues of blockchain, the rise of the technology has been hampered by a lack of understanding as to what it can be used for. More recently, the technology took a hit from the crash of Bitcoin prices at the start of the year.

With this in mind, the decision by one of the world’s largest enterprise software corporations (SAP) to integrate blockchain into its flagship supply chain packages is something of a surprise.

So, what is blockchain, and how can it be used in business?

Blockchain is, at least in the initial sense, a digital record of cryptocurrency transactions – a form of accounting software called distributed ledger technology (DLT). While it can be accessed by various parties it is, most importantly, encrypted, verifiable and public.

The element of this accounting development that has caught the eye of business is its application in supply chains – an opportunity that SAP is capitalising on. Blockchain technology, in essence, allows for a greater level of transparency and traceability, which means that businesses can be absolutely sure as to where their products are coming from, where they are in the supply chain, and whether the product purchased is truly the product that is paid for.

In simple terms, it dramatically lowers risk where previously there was only trust and uncertainty.
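The traceability argument rests on the append-only, hash-linked structure of the ledger. The toy sketch below is not SAP's implementation and uses made-up supply-chain events, but it shows why tampering with an earlier record is immediately detectable: every block commits to the hash of the one before it.

```python
import hashlib
import json
import time

def _hash(contents: dict) -> str:
    """Deterministic SHA-256 over a block's contents."""
    return hashlib.sha256(json.dumps(contents, sort_keys=True).encode()).hexdigest()

def new_block(data: dict, prev_hash: str) -> dict:
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = _hash({k: block[k] for k in ("timestamp", "data", "prev_hash")})
    return block

def verify(chain: list) -> bool:
    """Any change to an earlier record breaks every later hash link."""
    for prev, curr in zip(chain, chain[1:]):
        expected = _hash({k: curr[k] for k in ("timestamp", "data", "prev_hash")})
        if curr["prev_hash"] != prev["hash"] or curr["hash"] != expected:
            return False
    return True

# A toy supply-chain trail: each handover is appended as a new block.
chain = [new_block({"event": "harvested", "lot": "A17"}, prev_hash="0" * 64)]
chain.append(new_block({"event": "shipped", "lot": "A17", "carrier": "AcmeFreight"}, chain[-1]["hash"]))
chain.append(new_block({"event": "received", "lot": "A17", "store": "retailer-42"}, chain[-1]["hash"]))
print("chain valid:", verify(chain))
```

In a real deployment the chain would also be replicated across the parties involved, which is what removes the need to simply trust any single record-keeper.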

In May 2018, SAP’s blockchain lead, Torsten Zube, revealed the company was applying blockchain to agricultural supply chains through its “Farm to Consumer” initiative. More recently at SAP’s SapphireNow 2018 conference, the company took a stand with its “intelligent enterprise” undertaking.

The organisation announced a new range of partnerships and products to “enable enterprises to become more intelligent, with expanded capabilities from advanced technologies such as conversational artificial intelligence, blockchain and analytics” for use within its Leonardo package.

Speaking on the Blockchain integration strategy, Zube noted that: “Networking along the traditional lines of value chains will be replaced by sharing data governance, resources, processes and practices and lead to joint learning opportunities.”

“If enterprises can access the complete version of product history,” he explained, “this could result in a shift from a central unilateral supplier-led production to a consumer demand-led supply organised by a consortium of peers.”

Of particular interest, however, is SAP’s refusal to tie itself into any one blockchain provider early. Speaking on its blockchain service at the Sapphire conference, Gil Perez, Senior Vice President for Product and Innovation and Head of Digital Customer Initiatives at SAP, confirmed that blockchain technology is still being defined… noting that the company is not looking to commit until the market decides which way to go – minimising the impact on customers as the technology evolves.

With this in mind, there’s one thing that’s certain – Blockchain has joined IoT, machine learning and AI as concepts with a significant number of applications in the real world. With SAP now integrating it into its supply chain offering, the ledger service has taken the first step towards more widespread adoption.

Source: MA

If you’re interested in a career in SAP or IoT call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

AI and robotics forecast to generate 7.2m jobs, more than will be lost due to automation


Artificial intelligence is set to create more than 7m new UK jobs in healthcare, science and education by 2037, more than making up for the jobs lost in manufacturing and other sectors through automation, according to a report.

A report from PricewaterhouseCoopers argued that AI would create slightly more jobs (7.2m) than it displaced (7m) by boosting economic growth. The firm estimated about 20% of jobs would be automated over the next 20 years and no sector would be unaffected.

AI and related technologies such as robotics, drones and driverless vehicles would replace human workers in some areas, but also create many additional jobs as productivity and real incomes rise and new and better products were developed, PwC said.

Increasing automation in factories is a long-term trend but robots such as Pepper, created by Japan’s Softbank Robotics, are beginning to be used in shops, banks and social care, raising fears of widespread job losses.

However, PwC estimated that healthcare and social work would be the biggest winners from AI, where employment could increase by nearly 1 million on a net basis, equivalent to more than a fifth of existing jobs in the sector.

Professional, scientific and technical services, including law, accounting, architecture and advertising firms, are forecast to get the second-biggest boost, gaining nearly half a million jobs, while education is set to get almost 200,000 extra jobs.

John Hawksworth, the chief economist at PwC, said: “Healthcare is likely to see rising employment as it will be increasingly in demand as society becomes richer and the UK population ages. While some jobs may be displaced, many more are likely to be created as real incomes rise and patients still want the ‘human touch’ from doctors, nurses and other health and social care workers.

“On the other hand, as driverless vehicles roll out across the economy and factories and warehouses become increasingly automated, the manufacturing and transportation and storage sectors could see a reduction in employment levels.”

PwC estimated the manufacturing sector could lose a quarter of current jobs through automation by 2037, a total of nearly 700,000.

Transport and storage are estimated to lose 22% of jobs – nearly 400,000 – followed by public administration and defence, with a loss of almost 275,000 jobs, an 18% reduction. Clerical tasks in the public sector are likely to be replaced by algorithms while in the defence industry humans will increasingly be replaced by drones and other technologies.

 

London – home to more than a quarter of the UK’s professional, scientific and technical activities – will benefit the most from AI, with a 2.3% boost, or 138,000 extra jobs, the report said. The east Midlands is expected to see the biggest net reduction in jobs: 27,000, a 1.1% drop.

Source: The Guardian

 

If you’re interested in a career in Artificial Intelligence or Robotics call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Yamaha artificial intelligence transforms a dancer into a pianist


Yamaha AI enabled a world-renowned dancer Kaiji Moriyama to control a piano by his movements. The performance was accompanied by the Berlin Philharmonic Orchestra Scharoun Ensemble.

Yamaha Corporation is excited to announce that Yamaha artificial intelligence (AI) technology enabled world-renowned dancer Kaiji Moriyama to control a piano with his movements. The concert, held in Tokyo on November 22, 2017, was entitled "Mai Hi Ten Yu" and was sponsored by Tokyo University of the Arts and Tokyo University of the Arts COI. Yamaha provided an original system, which can translate human movements into musical expression by using AI technology, as technical cooperation for the concert.

 

 

Drawing on the system provided by Yamaha, Moriyama gave a brilliant performance accompanied by beautifully synchronized piano sound. Moreover, the performance featured other leading players, the Berlin Philharmonic Orchestra Scharoun Ensemble.

The concert performed by the talented players with Yamaha technology showed "a form of expression that fuses body movements and music."

Yamaha believes this performance represents steady progress in the pursuit of new forms of artistic expression and will continue to develop this technology to further expand the possibilities for human expression.

Technology Overview

The AI adopted in the system, which is now under development, can identify a dancer's movement in real time by analyzing signals from four types of sensors attached to a dancer's body. This system has an original database that links melody and movements, and, with this database, the AI on the system creates suitable melody data (MIDI) from the dancer's movements instantly. The system then sends the MIDI data to a Yamaha Disklavier™ player piano, and it is translated into music.
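The description above stops at the pipeline level (sensor signals in, MIDI data out), so the sketch below is a deliberately simple, hypothetical illustration of such a mapping rather than Yamaha's system: the movement features, note range and mapping rules are invented, and a raw MIDI note-on message is built by hand instead of being generated from a learned movement-to-melody database.

```python
from dataclasses import dataclass

@dataclass
class MovementFrame:
    """Toy stand-in for one time step of sensor readings on the dancer's body."""
    arm_height: float   # 0.0 (low) .. 1.0 (raised)
    speed: float        # 0.0 (still) .. 1.0 (fast)

def frame_to_midi(frame: MovementFrame, channel: int = 0) -> bytes:
    """Map a movement frame to a single MIDI note-on message.

    Higher arm position -> higher pitch; faster movement -> louder velocity.
    """
    note = 36 + int(frame.arm_height * 48)        # roughly C2..C6
    velocity = 20 + int(frame.speed * 100)        # 20..120
    status = 0x90 | (channel & 0x0F)              # MIDI note-on status byte
    return bytes([status, min(note, 127), min(velocity, 127)])

# Example: a raised, fast gesture produces a high, loud note.
msg = frame_to_midi(MovementFrame(arm_height=0.9, speed=0.8))
print(msg.hex())  # three bytes: status, note, velocity
```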

To convert dance movements into musical expression, the Yamaha Disklavier™ is indispensable because it can reproduce a rich range of sounds with extreme accuracy through very slight changes in piano touch. Moreover, a special Disklavier, configured based on Yamaha's flagship CFX concert grand piano, was used in the concert to express fully and completely the performance of the talented dancer Moriyama.

 

Source: Yamaha

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

The user experience of blockchain applications has a long way to go


People keep asking what the killer app of the blockchain is. It may already exist; if so, it’s hiding behind a terrible experience.

Conor Fallon from Hackermoon recently tried out a social media blockchain application called Minds. It turned out to be really bad. There were usability potholes around every corner, and the product seemed very bloated, trying to replace Facebook, Instagram and Twitter all at once.

Conceptually this product could be cool; it’s a social network that allows you to earn tokens while you interact with people. Then you can use those social tokens to promote your own posts or to cash in. What could go wrong? Well, a lot, apparently.

From the first screen it is immediately clear that little to no usability testing has been done on this product. I would wager not even internally. This is indicative of an immature UX strategy.

In defense of the guys at Minds, this is quite a common experience I have had with dApps.

Why will someone use your blockchain app?

What we often hear when blockchain entrepreneurs talk about their platform and why people will flock to it is the following:

1. Data ownership will drive adoption

This is incorrect: data ownership is not a strong reason to join something. Data breaches are a reason to stop doing something.

I hear many blockchain entrepreneurs say “People are sick of Facebook mining their data”, which may be true, but it’s only really relevant up to a point. The tide may be turning on Facebook, but will it turn on Google? Probably not; it’s extremely difficult to live in a world without using Google. So the real insight here is that, yes, some people may be sick of the government or companies spying on them, but it usually comes from a place of privilege, where you can say you would prefer to pay rather than get an ad-driven service for free.

A reason to join something is that it allows me to do something I couldn’t otherwise do. Snapchat has funny dog faces that look like fun, so I want to try that out.

2. Monetary incentive will drive adoption

Don’t assume that monetary incentive will solve all of your problems. In many instances, monetary incentives actually work against participants.

 

Why are sites like Wikipedia and Mumsnet so good? Well, it’s because people are intrinsically motivated to help other people. What would paying participants actually do to the content of these sites? Do you think it would get better? Don’t assume that adding money into the mix is going to solve all of your issues.

You are far better off aligning your features with the intrinsic motivations of your audience. And if you power the right parts of the site with monetary incentive, that may be the killer app.

The peasant and the chicken — a parable

This is a story from the biography of Che Guevara — which I read as an impressionable 14-year-old, so the memory is a bit hazy, but it goes something like this:

Castro’s rebels liberated a Cuban town from an oppressive regime. In the aftermath, Che spotted a depressed chicken farmer. “Why are you so sad?” asked Che. “We liberated you from your oppressors!”

“Your soldiers ate my chickens. Before, when Batista (the oppressor) came through our village, his soldiers also ate my chickens. When the next soldiers come, they will also eat my chickens.”

And so it goes, the developer communities can argue about ideological implementations of networks and power distributions, but it may all be for naught if the product doesn’t actually do anything meaningful for people.

A plea to blockchain app designers

Learn the basics of product strategy; it’s been around for as long as we’ve been designing physical products and has been tested time and time again. Learn the Kano model, and do research on your audiences before and during product development. Test economic incentives before rolling them out. Learn about people’s inherent irrational biases. All this will lead to the adoption we now require to build our crypto-utopia.

 

Source: Hackermoon

 

If you’re interested in a career in Big Data call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

The New Bot Economy


The robust supply chain created by car manufacturers over time means car manufacturing today is an efficient process of putting together parts sourced from specialized vendors located around the world. This reduces the time to make each car to just a few hours -- but more importantly makes it fully repeatable across many car models. The advent of streamlined automotive supply lines and assembly was a key evolution driven by accelerating demand for cars in the booming 50s and 60s. Today, the RPA market is at a similar stage of evolution, with accelerating market demand driving innovation.

Justin Watson, Partner at Deloitte, spoke of the Bot Economy at Automation Anywhere’s IMAGINE London event. His analogy: just as the car industry has evolved towards easy assembly of standard parts, building your RPA from pre-built components is the way of the future.

We could not agree more.


Building RPA bots should be an exercise in chaining together "plug & play" bots from the Bot Store.

Here is a cool example.

Pre-built bots with built-in value

Rather than announcing handshakes and integration-at-a-distance type partnerships (that leave the burden of getting the integration up and running on the customer), we have set out to create a true "plug & play" experience using the Bot Store.

Our partners (SIs, ISVs, Integration partners, etc.) list their bots on the Bot Store alongside Automation Anywhere bots. Each bot encapsulates best practices that reflect many years of combined expertise across RPA deployments.

The ecosystem of bots is evolving and growing quickly. Every Automation Anywhere customer - independent of the stage of their RPA journey, or specific affiliations/industries/processes – can leverage the built-in value of our ecosystem of – dare we say – perfected bots.

What exactly is the Automation Anywhere Bot Store?

It’s a true marketplace of pre-built bots that connects customers with bot creators. Easily search, assess, and select bots based on capabilities that are peer reviewed to evaluate usage experience.

The Bot Store will showcase the best bots across many business applications (Salesforce, SAP, Zendesk, ServiceNow, etc.) built both by Automation Anywhere and our valuable partners. That makes it fundamentally different from one-off partnerships or technology alliances based on published APIs and community libraries.

Benefit now

Bot Store is available now. Search and pick and choose bots of immediate value and relevance to your stage of the RPA journey.

It has been only a few weeks since the launch of the Bot Store, and the response has been overwhelming. The sheer volume of bot downloads reinforces our belief that this will truly accelerate the race to ROI on RPA investments and enable enterprises to achieve their RPA goals in a very short time.

 

Source: Automation Anywhere

 

If you’re interested in a career in Robotics call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Amazon AI predicts users’ musical tastes based on playback duration



AI engineers at Amazon have developed a novel way to learn users’ musical tastes and affinities — by using song playback duration as an “implicit recommendation system.” Bo Xiao, a machine learning scientist and lead author on the research, today described the method in a blog post ahead of a presentation at the Interspeech 2018 conference in Hyderabad, India.

Distinguishing between two similarly titled songs — for instance, Lionel Richie’s “Hello” and Adele’s “Hello” — can be a real challenge for voice assistants like Alexa. One way to resolve this is by having the assistant always choose the song that the user is expected to enjoy more, but as Xiao notes, that’s easier said than done. Users don’t often rate songs played back through Alexa and other voice assistants, and playback records don’t necessarily provide insight into musical taste.

“To be as useful as possible to customers, Alexa should be able to make educated guesses about the meanings of ambiguous utterances,” Xiao wrote. “We use machine learning to analyze playback duration data to infer song preference, and we use collaborative-filtering techniques to estimate how a particular customer might rate a song that he or she has never requested.”

The researchers found a solution in song duration. In a paper (“Play Duration based User-Entity Affinity Modeling in Spoken Dialog System”), Xiao and colleagues reasoned that people will cancel the playback of songs they dislike and let songs they enjoy continue to play, providing a dataset on which to train a machine learning-powered recommendation engine.

They divided songs into two categories: (1) songs that users played for less than 30 seconds and (2) songs that they played for longer than 30 seconds. Each was represented as a digit in a matrix grid — the first category was assigned a score of negative one, and the second a score of positive one.

To account for playback interruptions unrelated to musical preference, such as an interruption that caused a user to stop a song just as it was beginning, they added a weighting function. Songs received a greater weight if they were played back for 25 seconds instead of one second, for example, or for three minutes instead of two minutes.
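The paper's exact weighting scheme and model are not reproduced here, but the general idea (signed, weighted implicit feedback from play durations, fed into collaborative filtering) can be sketched as follows. The 30-second split comes from the description above; the weighting curve, the toy playback log and the user-based cosine-similarity predictor are assumptions made for illustration.

```python
import numpy as np

THRESHOLD = 30.0  # seconds: shorter plays are read as "skipped", longer as "liked"

def affinity(duration_s: float, full_length_s: float) -> float:
    """Signed, weighted affinity from one playback (an assumed weighting curve).

    The sign follows the 30-second split; the weight grows with how decisive the
    signal is, so a 2-second skip or a near-complete listen counts for more than
    a playback stopped right around the threshold.
    """
    if duration_s < THRESHOLD:
        return -(THRESHOLD - duration_s) / THRESHOLD
    return min((duration_s - THRESHOLD) / max(full_length_s - THRESHOLD, 1e-9), 1.0)

# Toy playback log: (user, song, seconds played, song length in seconds).
log = [
    ("alice", "hello_adele", 210, 295), ("alice", "hello_richie", 5, 245),
    ("bob",   "hello_adele", 290, 295), ("bob",   "hello_richie", 12, 245),
    ("carol", "hello_richie", 230, 245),
]

users = sorted({row[0] for row in log})
songs = sorted({row[1] for row in log})
M = np.zeros((len(users), len(songs)))          # user x song affinity matrix
for user, song, dur, full in log:
    M[users.index(user), songs.index(song)] = affinity(dur, full)

def predict(user: str, song: str) -> float:
    """User-based collaborative filtering with cosine similarity between users."""
    i, j = users.index(user), songs.index(song)
    sims, vals = [], []
    for k in range(len(users)):
        if k == i or M[k, j] == 0:
            continue
        denom = np.linalg.norm(M[i]) * np.linalg.norm(M[k])
        sims.append(float(np.dot(M[i], M[k]) / denom) if denom else 0.0)
        vals.append(M[k, j])
    total = sum(abs(s) for s in sims)
    return sum(s * v for s, v in zip(sims, vals)) / total if total else 0.0

# Which "Hello" should be picked for carol? The higher predicted affinity wins.
print({song: round(predict("carol", song), 2) for song in songs})
```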

When evaluated against users’ inferred affinity scores, the correlation was strong enough to demonstrate the model’s effectiveness, Xiao said. Furthermore, it implied that it’s good for more than music — in the future, the researchers plan to apply it to other content, such as audiobooks and videos.

Source: Venturebeat

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Chatbots: Redefining Customer Service for Growing Companies


The workplace energy of a small and midsize technology business is unlike anything seen in a large enterprise. In the midst of a fast-paced, caffeine-fueled day, the freedom to make a difference at work is always present. This spirit is certainly not limited to growing companies, but it’s certainly easier to spot these opportunities and see the fruits of your labor in one.


However, not every function may feel the love—especially customer service representatives. They are on the front line, fielding calls from customers who are often angry or disappointed. Whether the instructions were misread, passcodes were forgotten, or a new part is needed, reps can feel burned out after working on the same requests day after day.

It doesn’t have to be this way for customer service reps. By combining machine learning with the recent slew of chatbots, service organizations have a distinct opportunity to focus on the experience of each interaction from the perspective of the customer and the rep.

What Are Chatbots? And How Does Machine Learning Make Them Even Better?

Chatbots are computer programs that mimic human-to-human written and voice-enabled communication by using artificial intelligence. From self-initiating a series of tasks to holding a quasi-natural, two-way conversation, this technology is beginning to change how consumers and the brands they love engage with each other online, on the phone, and even through e-mail.

Suppose you wanted to know if today’s ballgame will be rained out. If a chatbot is not available, you would direct your browser to weather.com, for example, and then type in your zip code for the forecast. However, the use of a chatbot can turn this into a faster, more meaningful interaction. For instance, the Weather Channel’s chatbot allows you to send a chat text asking for current conditions or a three-day forecast, and the chatbot replies immediately.

Yes, this is a very simplistic example of a chatbot. But with artificial intelligence evolving into more sophisticated forms, such as machine learning, chatbots no longer need to be governed by just a series of preprogrammed rules, scripts, and prompts. Now, they can pull from the entire company’s collective expertise and experience and sift through it all to find the best-possible resolutions to a customer’s query.
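The difference between a purely scripted bot and one that draws on past resolutions can be shown with a tiny, hypothetical sketch. The rules, tickets and string-similarity fallback below are stand-ins; a real system would use intent classification and retrieval over the company's actual support history.

```python
from difflib import SequenceMatcher

# A scripted layer: fixed triggers with canned responses.
RULES = {
    "reset password": "You can reset your passcode at example.com/reset.",
    "order status": "Please share your order number and I'll look it up.",
}

# A "learning" layer: previously resolved tickets the bot can draw on.
RESOLVED_TICKETS = [
    ("the part arrived cracked and unusable",
     "Sorry about that! A replacement part ships free within 2 days."),
    ("instructions step 4 makes no sense",
     "Step 4 is clearer in the updated PDF manual: example.com/manual."),
]

def respond(message: str) -> str:
    msg = message.lower()
    # 1. Script/rule pass: exact trigger phrases.
    for trigger, reply in RULES.items():
        if trigger in msg:
            return reply
    # 2. Fall back to the closest past resolution (a stand-in for ML retrieval).
    best = max(RESOLVED_TICKETS, key=lambda t: SequenceMatcher(None, msg, t[0]).ratio())
    return best[1]

print(respond("How do I reset password?"))
print(respond("My new part showed up cracked"))
```

The second reply is the point: it is not covered by any rule, yet the bot still answers by reusing how a human rep resolved a similar query before.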

Directing Interest in Machine Learning towards a More-Rewarding Service Experience

For years, technology firms have been primarily focused on setting a digital foundation with tools such as the cloud, Big Data, and analytics. However, some of that attention is now being pulled towards machine learning to turbocharge their business processes, decision-making, and customer interactions.

In fact, the Oxford Economics study, “The Transformation Imperative for Small and Midsize Technology Companies,” suggests a higher rate of investment in machine learning among technology firms than their peers in other industries. Although adoption numbers were still low at 6% for small and midsize technology companies in 2017, that same figure is projected to become more substantial as it nearly quadruples in 2019. Technology firms are leading the way, but companies in other industries should also consider how these tools can support their customer service function.

That said, chatbots present a clear opportunity for embracing machine learning in a way that is profoundly human, efficient, and meaningful without breaking the budget. They can help automate simple tasks, provide immediate service, and trigger specific, rules-based actions—whether a customer contacts the business through a messaging app, social media, phone, or e-mail—by learning how reps resolve frequently occurring queries. By mimicking simple, real-life conversations, chatbots can quickly become a low-cost way to offer around-the-clock customer assistance.

Chatbots can also transition the customer service organization from a point of customer interaction to a source of business intelligence and marketing opportunities. As the technology addresses customer issues and triggers processes, it captures every request, piece of feedback, and action and pushes it into a cloud-based ERP system that every business area can assess. Marketing and sales teams, for example, can use this information to find new opportunities for cross- or up-selling, new promotions, bundled offers, and even new services.

Investing in Chatbots Drives Untapped Value for Customer Service

With all the above said, it may seem that chatbots are a natural next step for small and midsize companies in all industries to expand their customer service capabilities. However, it can be intimidating to go through the process of producing them.

Here’s the good news: there’s more than one way to design a chatbot.

Businesses can choose to develop their own bot with a low-cost app, subscription-based cloud service, usage-based collaborative bot platform, or technology partner. But no matter the chosen path, the development process must be defined by specific capability needs, data to be accessed and captured, system integration requirements, and intended goals. It is very important to find a platform—such as recast.ai—that doesn’t limit API calls and allows the creation of unlimited bots within a few minutes or hours, rather than weeks and months.

When matched closely to the needs of customer service reps and customers, chatbots can deliver a potential benefit that is more valuable than the price tag itself. No one likes to be bogged down by repetitive, mundane tasks that provide no real value to the company’s growth. But if chatbots take on those activities, the technology may be the godsend that customer service reps need to handle more-challenging exceptions that allow them to learn and grow their skills and contribute directly to the bottom line.

Source: SAP

If you’re interested in a career in Artificial Intelligence or SAP call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

An approach to logical cognition and rationality in artificial intelligence



1. Philosophical Overview

The ability to think logically is what distinguishes man from all other animals. Plato believed that we are all born with something called a “rational soul”, some essential property of all human beings that gives us the unique ability to think in logical and abstract ways. The result of possessing a rational soul, according to him, is the ability to access some “higher plane” of reality, which consists of so-called “forms”, or idealized representations of things by which our physical world and everything in it can be described, in terms of how well physical objects conform to the ideal representations given by the forms. The duty of man is then to sculpt our physical world to better fit these forms, and thus move ourselves toward some “perfect” idealized state, effectively progressing humanity.

While this idea is now generally considered outdated, the ideas of later philosophers contain many elements originally put forth by Plato in his description of forms, and are generally based on a more psychological approach, taking into account the cognitive processes that give rise to the idealized representations with which our experiences can be framed. The influence of Plato seems evident in the work of Kant and his description of “schemata”, or generalized models of things defined by logical relations. According to Kant, our minds have an inherent understanding of time and space, which gives rise to a sort of “mathematical intuition” that can be used to comprehend our patterns of perception. We can apply this intuition to build schematic structures, and effectively “plug” perceptual information into these structures to better understand our world in a logical, rational way.

Considering the philosophical background of these ideas, I will propose, outline, and describe a computer-scientific method in which an intelligent agent may utilize the process of “logical framing” to classify, organize, and comprehend its experience in a rational way, utilizing its observations to learn about the environment and improve its ability to make generalizations, predictions, and decisions about the world.

2. Technical Introduction

Our description of logical framing as a process of rational comprehension of perceptual experience by an intelligent agent begins with the definition of “templates” and “objects”. Templates are similar to forms and schemata, and objects are similar to perceptual patterns. While both are network-like structures of data, they differ in both content and function.

The nodes of an object represent the elements or parts of some external thing, and the links represent the relations between elements. Elements are defined by “descriptive properties” that exist along any number of dimensions (e.g. spatial, temporal, etc.), and relations are defined by “distinctive properties” that exist along dimensions shared by each of the elements to which the relation is connected. Descriptive properties can be thought of as values (e.g. spatial position) and distinctive properties as differences between values (e.g. spatial distance).

The nodes of a template, however, are descriptive functions over an object’s elements, and the links are distinctive functions over an object’s relations. Descriptive functions take the value of a descriptive property of a given element as input, and a distinctive function takes the value of a distinctive property of a given relation, which may also be understood as taking the values of a descriptive property from two connected elements.

3. Classification Process

Descriptive and distinctive functions return some indication of truth, either in the form of a binary value (i.e. 1 or 0) or in the form of a probabilistic value (i.e. between 0 and 1). The truth value returned by a function determines how well a given element or set of elements fits the logical definition provided by the template. Therefore the sum of truth values returned by the descriptive and distinctive functions of a template provides a fitness measurement for a given object. In more philosophical terms, the total truth value determines the degree to which a given object “participates” in the “essence” of a template.

The degree of participation of a given object with respect to a particular template can be used to decide how that object is classified. A high degree of participation indicates that the object is likely to be an “instance” of the template, and thus that it can be classified as such. This classification process requires some method of mapping a given object to a particular template, which consists of selecting a valid set of elements whose topology fits that of the template, and whose descriptive and distinctive properties fit the descriptive and distinctive functions of that template (i.e. result in an adequately high truth value).
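Sections 2 and 3 can be made concrete with a small sketch. The encoding below is one possible reading of the definitions above, not a specification: elements and relations are dictionaries of properties, a template is a set of functions returning truth values, and the degree of participation is simply their sum for a fixed mapping. All property names, functions and values are invented for illustration.

```python
from typing import Callable, Dict, Tuple

# --- An object: elements with descriptive properties, relations with distinctive ones ---
Element = Dict[str, float]                      # e.g. {"x": 3.0, "y": 1.0}
Relation = Dict[str, float]                     # e.g. {"dx": 2.0} between two elements

object_elements: Dict[str, Element] = {
    "a": {"x": 0.0, "y": 0.0},
    "b": {"x": 2.0, "y": 0.1},
}
object_relations: Dict[Tuple[str, str], Relation] = {
    ("a", "b"): {"dx": 2.0, "dy": 0.1},
}

# --- A template: descriptive/distinctive functions returning truth values in [0, 1] ---
descriptive_fns: Dict[str, Callable[[Element], float]] = {
    "n1": lambda e: 1.0 if e["y"] < 0.5 else 0.0,     # "lies near the baseline"
    "n2": lambda e: 1.0 if e["y"] < 0.5 else 0.0,
}
distinctive_fns: Dict[Tuple[str, str], Callable[[Relation], float]] = {
    ("n1", "n2"): lambda r: 1.0 if 1.5 <= r["dx"] <= 2.5 else 0.0,   # "about 2 apart"
}

def degree_of_participation(mapping: Dict[str, str]) -> float:
    """Sum of truth values for a fixed node-to-element mapping (Section 3)."""
    total = sum(descriptive_fns[n](object_elements[mapping[n]]) for n in mapping)
    for (n1, n2), fn in distinctive_fns.items():
        total += fn(object_relations[(mapping[n1], mapping[n2])])
    return total

print(degree_of_participation({"n1": "a", "n2": "b"}))  # 3.0: a strong fit for this template
```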

4. Mapping Process

The complexity of objects and templates can scale to theoretical infinity. This means that the mapping process which produces a valid classification for a given object cannot be performed by considering the entirety of an object or template all at once. The solution is instead to consider each piece of an object or template in isolation, starting at a single node called the “current node” and taking only its neighbors into consideration. The neighboring nodes of the template are then individually “filled” by the neighboring elements of the object; each time a valid element is selected for a specific node, the current node moves to the node that was just filled. This amounts to a depth-first search for the optimal mapping between a given object and a particular template. The search provides a way to measure the degree of participation of an object with respect to a template, and ultimately allows the object to be classified by computing the total truth value for each template in a set and selecting the highest one as the best-known option.

Each time all the neighbors of the current node are successfully filled, the previous node becomes the current node once again and the remaining neighbors are filled. This process occurs until the neighbors of the initial node are successfully filled, resulting in the calculation of the degree of participation for the current mapping. For each neighbor of the current node at any given time throughout the mapping process, the set of potential elements is found by first computing the truth values for the descriptive function associated with the neighbor, given each possible element.

This allows the set of potential elements to be reduced such that only the elements which satisfy the descriptive function remain. Then, each remaining element is passed to the distinctive function associated with the link between the current node and the neighbor, along with the element filling the current node which was previously selected. The set of potential elements is again reduced to only those elements which satisfy both the descriptive as well as the distinctive functions associated with the neighbor. In the event that a potential element is chosen and then later proves invalid, it is simply removed from the set of potential elements and another is selected for that neighbor. Through trial and error, the best-known mapping between an object and a template can be found and the degree of participation may be calculated.
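The mapping process reads naturally as a backtracking depth-first search. The sketch below is a simplified interpretation, not the article's algorithm verbatim: a three-node "evenly spaced points" template is matched against a small object, filling one node at a time and discarding candidate elements that violate the descriptive or distinctive constraints to nodes already filled. The example data and constraint ranges are invented.

```python
from typing import Callable, Dict, Optional, Tuple

Element = Dict[str, float]

# A tiny "object": four detected points, three of which are roughly evenly spaced.
elements: Dict[str, Element] = {
    "p1": {"x": 0.0}, "p2": {"x": 1.1}, "p3": {"x": 2.0}, "noise": {"x": 9.0},
}

def distance(a: str, b: str) -> float:
    return abs(elements[a]["x"] - elements[b]["x"])

# A template for "three roughly evenly spaced points": every node accepts any
# element (trivial descriptive functions), and each edge constrains the spacing.
nodes = ["n1", "n2", "n3"]
describe: Dict[str, Callable[[Element], bool]] = {n: (lambda e: True) for n in nodes}
edges: Dict[Tuple[str, str], Callable[[float], bool]] = {
    ("n1", "n2"): lambda d: 0.5 <= d <= 1.5,
    ("n2", "n3"): lambda d: 0.5 <= d <= 1.5,
}

def dfs(assignment: Dict[str, str]) -> Optional[Dict[str, str]]:
    """Fill one template node at a time, keeping only elements that satisfy the
    node's descriptive function and every distinctive function linking it to a
    node already filled; back up and retry when a choice dead-ends."""
    if len(assignment) == len(nodes):
        return assignment                              # every node filled: a valid mapping
    node = nodes[len(assignment)]                      # next unfilled node
    for el in elements:
        if el in assignment.values() or not describe[node](elements[el]):
            continue
        ok = True
        for (a, b), fn in edges.items():
            other = assignment.get(b) if a == node else assignment.get(a) if b == node else None
            if other is not None and not fn(distance(other, el)):
                ok = False
                break
        if not ok:
            continue
        result = dfs({**assignment, node: el})
        if result is not None:
            return result
    return None                                        # dead end: caller backtracks

print(dfs({}))  # finds {'n1': 'p1', 'n2': 'p2', 'n3': 'p3'}
```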

5. Object Construction

When an intelligent agent receives perceptual input, an object is constructed that represents the information observed. However, a filtering step must occur before this construction process. Visual perception, for instance, requires first an edge detection step that produces a space of black and white cells, where the edges are highlighted by white cells and everything else is black.

Once the edges are found, a set of “base templates” is applied to the space. Base templates are unique among templates in that their topological properties are such that one node in the template is connected to all other nodes, and no other connections exist. This is called a star topology. The “central node”, or the node connected to all others, is assigned to the element at a given position in the space, and the neighborhood around that position fills the other nodes in the template. The descriptive functions of a base template are restricted, and may only denote the presence or absence of an edge. The distinctive functions are fixed and denote the horizontal and vertical differences between the positions of elements. The base templates are moved along the space to classify the edge-patterns of each subspace, since each base template can only consider a small array of cells at a time.

The result is another space containing the abstracted objects derived from the classification of subspaces. This new space is smaller than the previous one, and the templates to which its subspaces are mapped do not conform to the strict topological constraints with which those at the previous level must comply. While the templates at this level do have restrictions on size, their topologies can take on a variety of forms, and their functions may vary in both the contents on which they act and the specific type of calculations they perform.
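The base-template pass can be pictured as sliding small star-shaped masks over a binary edge map. In the sketch below the edge map, the two base templates and their expected values are all made up for illustration; the point is only the mechanism of a central node plus fixed-offset neighbours being checked at every position.

```python
# A tiny binary edge map (1 = edge cell, 0 = background), e.g. after edge detection.
EDGE_MAP = [
    [0, 1, 0, 0],
    [0, 1, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]

# Base templates in star form: the central cell plus fixed (row, col) offsets to
# its neighbours, each annotated with the edge value it expects there.
BASE_TEMPLATES = {
    "vertical_edge":   {(0, 0): 1, (-1, 0): 1, (1, 0): 1, (0, -1): 0, (0, 1): 0},
    "horizontal_edge": {(0, 0): 1, (0, -1): 1, (0, 1): 1, (-1, 0): 0, (1, 0): 0},
}

def matches(template: dict, row: int, col: int) -> bool:
    """True if every node of the star template is satisfied at this centre."""
    for (dr, dc), expected in template.items():
        r, c = row + dr, col + dc
        if not (0 <= r < len(EDGE_MAP) and 0 <= c < len(EDGE_MAP[0])):
            return False
        if EDGE_MAP[r][c] != expected:
            return False
    return True

def classify_subspaces() -> dict:
    """Slide each base template across the map and record where it fires."""
    hits = {name: [] for name in BASE_TEMPLATES}
    for r in range(len(EDGE_MAP)):
        for c in range(len(EDGE_MAP[0])):
            for name, tmpl in BASE_TEMPLATES.items():
                if matches(tmpl, r, c):
                    hits[name].append((r, c))
    return hits

print(classify_subspaces())  # the vertical template fires at centre (1, 1)
```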

6. Object Abstraction

Templates are learned through experience and observation of objects. Sets of observed objects are clustered according to shared properties, as well as equivalent values of said properties. By grouping together like objects, the functions of a template may be composed in order to best describe the commonalities between objects in a particular group.

Template development follows a certain logic to determine how the functions ought to be composed. By following a set of “development rules”, a template is constructed by analyzing a set of grouped objects and extracting the attributes that describe them. The first development rule indicates the process by which objects are grouped together. It states that the likelihood of two objects, A and B, belonging to the same group corresponds to the ratio between the number of equivalent properties of A and B and the number of shared properties of A and B, together with the ratio between the number of shared properties of A and B and the average total number of properties of A and B.
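The rule is stated in terms of two ratios; one plausible reading, assumed here, is to multiply them, which collapses to the number of equivalent properties over the average total number of properties. The property sets in the worked example are invented.

```python
def grouping_likelihood(props_a: dict, props_b: dict) -> float:
    """One possible reading of the first development rule (an assumption):
    combine (equivalent / shared) with (shared / average total) by multiplying,
    which rewards objects that share many properties with matching values."""
    shared = set(props_a) & set(props_b)
    if not shared:
        return 0.0
    equivalent = sum(1 for p in shared if props_a[p] == props_b[p])
    avg_total = (len(props_a) + len(props_b)) / 2
    return (equivalent / len(shared)) * (len(shared) / avg_total)

a = {"colour": "red", "size": 3, "shape": "round"}
b = {"colour": "red", "size": 5, "shape": "round", "weight": 2}
print(grouping_likelihood(a, b))  # 2 of 3 shared values match, 3 shared of 3.5 avg -> ~0.57
```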

Source: Signified Origins

 

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Understanding the ‘black box’ of artificial intelligence



Artificial intelligence (AI) is playing an increasingly influential role in the modern world, powering more of the technology that impacts people’s daily lives.

For digital marketers, it allows for more sophisticated online advertising, content creation, translations, email campaigns, web design and conversion optimization.

Outside the marketing industry, AI underpins some of the tools and sites that people use every day. It is behind the personal virtual assistants in the latest iPhone, Google Home, and Amazon Echo. It is used to recommend what films you watch on Netflix or what songs you listen to on Spotify, steers conversations you have with your favorite retailers, and powers self-driving cars and trucks that are set to become commonplace on roads around the world.

What is perhaps less widely known is that AI may also decide whether you are approved for a loan, determine the outcome of a bail application, identify threats to national security, or recommend a course of medical treatment.

And as the technology progresses and becomes ever-more complex and autonomous, it also becomes harder to understand, not just for the end users, but even for the people who built the platforms in the first place. This has raised concerns about a lack of accountability, hidden biases, and the ability to have clear visibility of what is driving life-changing decisions and courses of action.

These concerns are particularly prevalent when looking at the uses of deep learning, a form of artificial intelligence that requires minimal guidance but ‘learns’ as it goes by identifying patterns in the data and information it can access. It uses neural networks and evolutionary algorithms, essentially AI being built by AI, which can quickly come to resemble a tangled mess of connections that is nearly impossible for analysts to disassemble and fully understand.

What are neural networks?

The neural networks behind this new breed of deep machine learning are inspired by the connected neurons that make up a human brain. They use a series of interconnected units or processors and are adaptive systems that adjust their outputs as they go, essentially ‘learning’ by example and adapting their behavior based on results.

This mimics evolution in the natural world, but at a much faster pace, with the algorithms quickly adapting to the patterns and results discovered to become increasingly accurate and valid.

Neural networks can identify patterns and trends among data that would be too difficult or time-consuming to deduce through human research, consequently creating outputs that would otherwise be too complex to manually code using traditional programming techniques.

This form of machine learning is very transparent on some levels, as it reflects the human behavior of trial and error, but at a speed and scale that wouldn’t otherwise be possible. But it is this speed and scale that makes it hard for the human brain to drill down into the expanding processes and keep track of the millions of micro-decisions that are powering the outputs.

 

Why transparency is important in artificial intelligence

What is black box AI? Put simply, it is the idea that we can understand what goes in and what comes out, but don’t understand what goes on inside.

As AI is used to power more and more high profile and public facing services, such as self-driving cars, medical treatment, or defense weaponry, concerns have understandably been raised about what is going on under the hood. If people are willing to put their lives in the hands of AI-powered applications, then they would want to be sure that people understand how the technology works and how it makes decisions.

The same is true of business functions. If you’re a marketer entrusting AI to design and build your website or make important conversion optimisation decisions on your behalf, then wouldn’t you want to understand how it works? After all, design changes or multivariate tests can cost or make a business millions of dollars a year.

There have been calls to end the use of ‘black box’ algorithms in government because, without true clarity on how they work, there can be no accountability for decisions that affect the public. Fears have also been raised over bias within decision-making algorithms, with a perceived lack of due process in place to prevent or protect against it.

There is also a strong case for making the accountability of AI systems, and their openness to interrogation, a legal as well as an ethical right. If machines are making life-changing decisions, then it stands to reason that those decisions should be able to be held up to the highest scrutiny.

A report from AI Now, an AI institute at NYU, has warned that public agencies and government departments should rethink the AI tools they are using to ensure they are accountable and transparent when used for making far-reaching decisions that affect the lives of citizens.

So are all these fears over black box AI well founded, and what can be done to reassure users about what is going on behind the machines?

Work on a need to know basis

Many digital marketers and designers have an overall understanding of digital processes and systems, but not necessarily a deep understanding of how all of those things work. Many functions are powered by complex algorithms, code, programming or servers, and yet are still deemed trustworthy enough for investing large chunks of the marketing budget.

Take SEO, for example. How Google ranks search results is a notoriously secret formula. But agencies and professionals make careers out of their own interpretation of the rules of the game, trying to deliver what they think Google wants to be able to boost their rankings.

Similarly, Google AdWords and Facebook Ads have complex AI behind them, yet the inner workings of the auctions and ad positions are kept relatively quiet behind the closed doors of the internet giants. While there is an argument that such companies should be more transparent when they wield such power, this doesn’t stop marketers from investing in the platforms. Not understanding the complexities does not stop people from optimizing their campaigns; instead, they focus on what goes in and monitor the results to gain an understanding of what works best.

There is also an element of trust that if you play by the rules these platforms do publicize and work to improve your campaigns, then their algorithms will do the right thing with your data and your advertising spend.

By choosing reputable machine learning platforms and constantly monitoring what works, you can feel confident with the technology, even if you don’t have a clear understanding of the complex workings behind them.

A lot of people will also put their trust in mass-market AI hardware, without expecting to understand what’s inside the black box. A layman who drives a regular car with no real understanding of how it changes gear is no more in the dark than somebody who does not know how their self-driving car changes direction.

But of course, there is a key distinction between end users understanding something, and those who can hold it accountable having clarity over how and why an autonomous vehicle chose its path. Accident investigators, insurance assessors, road safety authorities and car maintenance companies would all have a vested interest in understanding how and why driving decisions are made.

Deep learning networks could be made up of millions, billions or even trillions of connections. Therefore, auditing each connection in order to understand every decision would often be unmanageable, convoluted and potentially impossible to interpret. So, when addressing concerns over the accountability and opacity of AI networks, it’s important to prioritize what you need to know, what you want to understand, and why.

Deep learning can be influenced by its teachers

As we’ve seen, deep learning is in some ways a high volume system of trial and error, testing out what works and what doesn’t, identifying measures of success, and building on the wins. But humans don’t evolve through trial and error alone; there’s also teaching passed down to help shape our actions.

Eat a bunch of wild mushrooms foraged in the forest, and you’ll find out the hard way which ones are poisonous. But luckily we’re able to learn from the errors of those who’ve gone before us, and we also make decisions on imparted as well as acquired knowledge. If you read a book on mushrooms or go out with an experienced forager then they can tell you which ones to avoid, so you don’t have to go through the gut-wrenching trial and error of eating the dangerous varieties.

Likewise, many neural networks allow information to be fed into them to help shape the decision-making process. This human influence should give a level of reassurance that the machines are not making all their decisions based only on black box experiences of which we don’t have a clear view.

To use AI-powered optimization platform Sentient Ascend as an example, it needs input from your CRO team in the shape of hypotheses and testing ideas in order to run successful tests.

In other words, Ascend uses your own building blocks and then uses evolutionary algorithms to identify the most powerful combinations and variations of those building blocks. You’re not giving free rein to an opaque AI tool to decide how to optimize your site, but instead harnessing the power and scale of AI in order to test more of your ideas, faster and more efficiently.

Focus on your key results

As we’ve seen, when it comes to cracking open the black box of AI tools in marketing, it raises the question of how many of your other marketing tools you truly understand. For performance-based professionals, AI offers another tool for your belt, but the most important thing is whether it is delivering the results you need.

You should be measuring and testing your tools and strategies with AI tools, as with any other technology. This gives you visibility of what is working for your business.

By adopting CRO principles of testing, measuring and learning, you should have the confidence that any business decisions you make based on AI are solid and reliable – even if you couldn’t stand in front of your CEO and explain the nitty-gritty of how each connected node under the hood worked together.

But despite the opaque reputation, many AI-powered platforms do allow users to peek inside the black box. Evolutionary algorithms which make their decisions based on trial and error can also be a little easier to understand for those without expert knowledge in machine learning processes.

Sentient Ascend users, for example, get access to comprehensive reporting, which includes graphs allowing you to home in on the performance of each different design candidate. This allows full visibility to understand the ‘thought process’ behind the algorithms’ decisions to progress or remove certain variations.

Of course, scale can be a sticking point for those who want to deep dive into the inner workings of the software. The advantage of using AI to power your optimization testing is that it can run tests at a greater volume and scale than traditional, manually-built A/B testing tools. Therefore spending time to go back through and investigate every single variation could be very time-consuming. For example, what appears to be a relatively simple layout above the fold could easily have a million different variations to be tested.

The same applies to many other use cases for AI. If you’re using machine learning to analyze different datasets to be able to predict stock price changes, then going back in to check every data point assessed is not going to be a very efficient use of time. But it’s reassuring to know that the option is there to delve into the data should you need to audit performance or get a deeper understanding.

But this volume of data is why it’s important to prioritize the KPIs that matter most to you. And if you are measuring against your key business metrics and getting positive results, then the idea of taking a slight leap of faith as to how the black box tools deliver their results becomes much easier to swallow. Carry out due diligence on the tools you use, and you should be willing to accept accountability yourself for the results they deliver.

Making the machines more accountable

It’s the convoluted and complex nature of neural networks that can make them difficult to interrogate and understand. There are so many layers and a tangled web of connections that lead to outputs, that detangling them can seem a near-impossible task.

But many systems are now having some additional degrees of accountability built into them. MIT’s Regina Barzilay has worked on an AI system for mining pathology reports, but added in an additional step whereby the system pulls out and highlights snippets of text that represent a pattern discovered by the network.

Nvidia, which develops chips to power autonomous vehicles, has been working on a way of visually highlighting what the system focuses on to make its driving decisions.

While such steps will help offer reassurances and some clarity as to how deep learning networks arrive at decisions, many AI platforms are still some way off being able to offer a completely transparent view. It seems natural that in a world becoming increasingly reliant on AI, there will need to be an element of trust involved as to how it works, in the same way that there is inherent trust in humans who are responsible for decision making. Jury members are not quizzed on exactly what swayed their decision, nor are their brain activities scanned and recorded to check everything is functioning as planned. Yet jury decisions are still upheld by law on good faith.

With the evolving complexity of AI, it is almost inevitable that some of its inner workings will appear to be a black box to all but the very few who can comprehend how they work. But that doesn’t mean accountability is out of the question. Use the data you have available, identify the key information you need to know, and make the most of the reporting tools within your AI platforms, and the black box machines will not appear as mysterious as first feared.

Source: Sentient.ai

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Medrobotics Flex Robotic System is changing how medical care is given

The Medrobotics Flex Robotic System helps surgeons reach complex anatomical locations, changing how medical care is given.

 

The Flex Robotic System offers a stable surgical platform and excellent instrument triangulation.

If you’re interested in a career in Robotics call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Can AI Write its Own Applications?

Early last year, a Microsoft research project dubbed DeepCoder announced that it had made progress creating AI that could write its own programs.

Such a feat has long captured the imagination of technology optimists and pessimists alike, who might consider software that creates its own software as the next paradigm in technology – or perhaps the direct route to building the evil Skynet.

As with most machine learning or deep learning approaches that make up the bulk of today’s AI, DeepCoder created code based on large numbers of examples of existing code that researchers used to train the system.

The result: software that ended up assembling bits of human-created programs, a feat Wired Magazine referred to as ‘looting other software.’

And yet, in spite of DeepCoder’s PR faux pas, the idea of software smart enough to create its own applications remains an area of active research, as well as an exciting prospect for the digital world at large.

The Notion of ‘Intent-Based Programming’

What do we really want when we say we want software smart enough to write applications for us? The answer: we want to be able to express our intent for the application and let the software take it from there.

The phrase ‘intent-based’ comes from the emerging product category ‘intent-based networking,’ an AI-based approach to configuring networks that divines the business intent of the administrator.

An intent-based networking system (IBNS) enables admins to define a high-level business policy. The IBNS then verifies that it can execute the policy, manipulates network resources to create the desired state, and monitors the state of the network to ensure that it is enforcing all policies on an ongoing basis, taking corrective action when necessary.

Intent-based programming, by extension, takes the concept of intent-based networking and extends it to any type of application a user might desire.

For example, you could ask Alexa to build you an application that, say, kept track of your album collection. It would code it for you automatically and present the finished, working application to you, ready for use.

What Might Be Going on Under the Covers

In the simple Alexa example above, the obvious approach for the AI to take would be to find an application similar to the one the user requested, and then make tweaks to it as necessary, or perhaps assemble the application out of pre-built components.

In other words, Alexa would be following a technique similar to DeepCoder’s, borrowing code from other places and using those bits and pieces as templates to meet a current need.

But assembling templates or other human-written code isn’t what we really mean by AI-written software, is it? What we’re really looking for is the ability to create applications that are truly novel, and thus most of their inner workings don’t already exist in some other form.

In other words, can AI be creative when it creates software? Can it create truly novel application behavior, behavior that no human has coded before?

5GLs to the Rescue

Using software that can take the intent of the user and generate the desired application has been a wish-list item for computer science researchers for decades. In fact, the Fifth Generation Language (5GL) movement from the 1980s sought to “make the computer solve a given problem without the programmer,” according to Wikipedia.

The idea with 5GLs was for users to express their intent in terms of constraints, which the software would then translate into working applications. This idea appeared promising but turned out to have limited applicability.

The sorts of problems that specifying constraints alone could solve turned out to be a rather small set: mostly mathematical optimization tasks that would seek a mathematical solution to a set of mathematical expressions that represented the constraints.

The challenge facing the greater goal of creating arbitrary applications was that 5GLs weren’t able to express algorithms – the sequence of steps programmers specify when they write code by hand.

As a result, 5GLs didn’t really go anywhere, although they did lead to an explosion of declarative, domain-specific languages like SQL and HTML – languages that separate the representation of the intent of users from the underlying software.

But make no mistake: expressing your intent in a declarative language is very different from software that can create its own applications. Writing SELECT * FROM ALBUMLIST is a far cry from ‘Alexa, build me an app that keeps track of my albums.’

The missing piece to the 5GL puzzle, of course, is AI.

A Question of Algorithms

In the 1980s we had no way for software to create its own algorithms – but with today’s AI, perhaps we do. The simple optimization tasks that 5GLs could handle have grown into full-fledged automated optimization for computer algebra systems, which would qualify as computer-generated algorithms. However, these are still not general purpose.

There are also research projects like Google AutoML, which creates machine learning-generated neural network architectures. You can think of a neural network architecture as a type of application, albeit one that uses AI. So in this case, we have AI that is smart enough to create AI-based applications.

AutoML and similar projects are quite promising to be sure. However, not only have we not moved much closer to Skynet, but such efforts also fall well short of the intent-based programming goal I described earlier.

The Context for Human Intent

Fundamentally, AutoML and intent-based programming are going in different directions, because they have different contexts for how users would express their intent. The Alexa example above is unequivocally human-centric, as it leverages Alexa’s natural language processing and other contextual skills to provide a consumer-oriented user experience.

In the case of AutoML (or any machine learning or deep learning effort, for that matter), engineers must express success conditions (i.e., their intent) in a formal way.

If you want to teach AI to recognize cat photos, for example, this formal success condition is trivial: of a data set containing a million images, these 100,000 have cats in them. Either the software gets it right or it doesn’t, and it learns from every attempt.

What, then, is the formal success condition for ‘the album tracking application I was looking for’? Answering such a question in the general case is still beyond our abilities.

Today’s State of the Art

Today’s AI cannot create an algorithm that satisfies a human’s intent in all but the simplest cases. What we do have is AI that can divine insights from patterns in large data sets.

If we can boil down algorithms into such data sets, then we can make some headway. For example, if an AI-based application has access to a vast number of human-created workflows, then it can make a pretty good guess as to the next step in a workflow you might be working on at the moment.

In other words, we now have autocomplete for algorithms – what we call ‘next best action.’ We may still have to give our software some idea of how we want an application to behave, but AI can assist us in figuring out the steps that make it work.

The Intellyx Take

AI that can provide suggestions for the next best action but cannot build an entire algorithm from scratch qualifies more as Augmented Intelligence than Artificial Intelligence.

When we are looking for software that can satisfy human intent, as opposed to automatically solving a problem on its own, we’re actually looking for this sort of collaboration. After all, we still want a hand in building the application – we just want the process to be dead simple.

It’s no surprise, therefore, that the burgeoning low-code/no-code platform market is rapidly innovating in this direction.

Today’s low-code/no-code platforms support sophisticated, domain-specific declarative languages that give people the ability to express their intent in English-like expressions (or other human languages of choice).

They also have the ability to represent apps and app components as templates, affording users the ability to assemble pieces of applications with ‘drag and drop’ simplicity.

And now, many low-code/no-code platform vendors are adding AI to the mix, augmenting the abilities of application creators to specify the algorithms they intend their applications to follow.

Someday, perhaps, we’ll simply pick up our mic and tell such platforms what we want and they’ll build it automatically. We’re not quite there yet, but we’re closer than we’ve ever been with today’s low-code/no-code platforms – and innovation is proceeding at a blistering pace. It won’t be long now.

Source: IoT.sys

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Essentials of Deep Learning: Introduction to Long Short Term Memory

Introduction

Sequence prediction problems have been around for a long time. They are considered as one of the hardest problems to solve in the data science industry. These include a wide range of problems; from predicting sales to finding patterns in stock markets’ data, from understanding movie plots to recognizing your way of speech, from language translations to predicting your next word on your iPhone’s keyboard.

With the recent breakthroughs in data science, Long Short Term Memory networks (a.k.a. LSTMs) have been found to be the most effective solution for almost all of these sequence prediction problems.

LSTMs have an edge over conventional feed-forward neural networks and RNNs in many ways. This is because of their property of selectively remembering patterns for long durations of time. The purpose of this article is to explain LSTMs and enable you to use them in real-life problems. Let’s have a look!
Note: To go through the article, you must have basic knowledge of neural networks and of how Keras (a deep learning library) works. You can refer to the articles mentioned below to understand these concepts:
•    Understanding Neural Network From Scratch
•    Fundamentals of Deep Learning – Introduction to Recurrent Neural Networks
•    Tutorial: Optimizing Neural Networks using Keras (with Image recognition case study)
 
Table of Contents
1.    Flashback: A look into Recurrent Neural Networks (RNN)
2.    Limitations of RNNs
3.    Improvement over RNN: Long Short Term Memory (LSTM)
4.    Architecture of LSTM
      1.    Forget Gate
      2.    Input Gate
      3.    Output Gate
5.    Text generation using LSTMs
 
1. Flashback: A look into Recurrent Neural Networks (RNN)
Take an example of sequential data, which can be the stock market’s data for a particular stock. A simple machine learning model or an Artificial Neural Network may learn to predict the stock prices based on a number of features: the volume of the stock, the opening value, etc. While the price of the stock depends on these features, it is also largely dependent on the stock values in the previous days. In fact, for a trader, these previous-day values (or the trend) are one major deciding factor for predictions.
In the conventional feed-forward neural networks, all test cases are considered to be independent. That is, when fitting the model for a particular day, there is no consideration of the stock prices on the previous days.
This dependency on time is achieved via Recurrent Neural Networks. A typical RNN looks like:

This may be intimidating at first sight, but once unfolded, it looks a lot simpler:

2. Limitations of RNNs

Recurrent Neural Networks work just fine when we are dealing with short-term dependencies, for example predicting the missing word in a sentence like “The colour of the sky is ___”.

Here, RNNs turn out to be quite effective. This is because the problem has nothing to do with the wider context of the statement. The RNN need not remember what was said before this, or what its meaning was; all it needs to know is that in most cases the sky is blue. Thus the prediction would be “blue”.

However, vanilla RNNs fail to understand the context behind an input. Something that was said long before cannot be recalled when making predictions in the present. Let’s understand this with an example: suppose a passage mentions early on that the author has worked in Spain for 20 years, and then, much later, asks the network to predict which language he speaks fluently.

 

Here, we can understand that since the author has worked in Spain for 20 years, it is very likely that he possesses a good command of Spanish. But, to make a proper prediction, the RNN needs to remember this context. The relevant information may be separated from the point where it is needed by a huge amount of irrelevant data. This is where a Recurrent Neural Network fails!

The reason behind this is the problem of the Vanishing Gradient. In order to understand this, you’ll need to have some knowledge of how a feed-forward neural network learns. We know that for a conventional feed-forward neural network, the weight update applied to a particular layer is a multiple of the learning rate, the error term from the previous layer and the input to that layer. Thus, the error term for a particular layer is effectively a product of all the previous layers’ errors. When dealing with activation functions like the sigmoid function, the small values of its derivative (occurring in the error term) get multiplied many times over as we move towards the starting layers. As a result, the gradient almost vanishes as we move towards the starting layers, and it becomes difficult to train these layers.
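As a small numerical illustration of this effect (not taken from the source): the derivative of the sigmoid is at most 0.25, so a product of just twenty such factors is already vanishingly small.

import numpy as np

# The sigmoid's derivative peaks at 0.25; multiplying many such factors together
# (one per layer, or per time step in an unrolled RNN) drives the gradient to ~0.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_derivative(z):
    s = sigmoid(z)
    return s * (1.0 - s)

grad = 1.0
for layer in range(20):              # 20 layers, each at the most favourable point z = 0
    grad *= sigmoid_derivative(0.0)
print(grad)                          # ~9.1e-13 -- effectively zero by the early layers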

A similar case is observed in Recurrent Neural Networks. An RNN remembers things for only short durations of time: if we need the information after a short while it may be reproducible, but once a lot of words have been fed in, this information gets lost somewhere along the way. This issue can be resolved by applying a slightly tweaked version of RNNs – Long Short-Term Memory Networks.

   

3. Improvement over RNN: LSTM (Long Short-Term Memory) Networks

When we arrange our calendar for the day, we prioritize our appointments, right? If we need to make some space for anything important, we know which meeting could be canceled to accommodate it.

It turns out that an RNN doesn’t do this. In order to add new information, it transforms the existing information completely by applying a function. Because of this, the entire information is modified as a whole; there is no consideration for ‘important’ and ‘not so important’ information.

LSTMs on the other hand, make small modifications to the information by multiplications and additions. With LSTMs, the information flows through a mechanism known as cell states. This way, LSTMs can selectively remember or forget things. The information at a particular cell state has three different dependencies.

We’ll visualize this with an example. Let’s take the example of predicting stock prices for a particular stock. The stock price of today will depend upon:

  1. The trend that the stock has been following in the previous days, maybe a downtrend or an uptrend.
  2. The price of the stock on the previous day, because many traders compare the stock’s previous day price before buying it.
  3. The factors that can affect the price of the stock for today. This can be a new company policy that is being criticized widely, or a drop in the company’s profit, or maybe an unexpected change in the senior leadership of the company.

These dependencies can be generalized to any problem as:

  1. The previous cell state (i.e. the information that was present in the memory after the previous time step)
  2. The previous hidden state (i.e. this is the same as the output of the previous cell)
  3. The input at the current time step (i.e. the new information that is being fed in at that moment)

Another important feature of LSTM is its analogy with conveyor belts!

That’s right!

Industries use them to move products around for different processes. LSTMs use this mechanism to move information around.

We may have some addition, modification or removal of information as it flows through the different layers, just like a product may be molded, painted or packed while it is on a conveyor belt.

The following diagram explains the close relationship of LSTMs and conveyor belts.


Although this diagram is not even close to the actual architecture of an LSTM, it solves our purpose for now.

It is because of this property of LSTMs, where they do not manipulate the entire information but rather modify it slightly, that they are able to forget and remember things selectively. How they do so is what we are going to learn in the next section.

 

4. Architecture of LSTMs

The functioning of an LSTM can be visualized by understanding the functioning of a news channel’s team covering a murder story. Now, a news story is built around facts, evidence and statements of many people. Whenever a new event occurs, you take one of three steps.

Let’s say we were assuming that the murder was done by ‘poisoning’ the victim, but the autopsy report that just came in said that the cause of death was ‘an impact on the head’. Being a part of this news team, what do you do? You immediately forget the previous cause of death and all stories that were woven around this fact.

What if an entirely new suspect is introduced into the picture, a person who had a grudge against the victim and could be the murderer? You input this information into your news feed, right?

Now all these broken pieces of information cannot be served on mainstream media. So, after a certain time interval, you need to summarize this information and output the relevant things to your audience, maybe in the form of “XYZ turns out to be the prime suspect.”

Now let’s get into the details of the architecture of LSTM network:

 

 


Now, this is nowhere close to the simplified version which we saw before, but let me walk you through it. A typical LSTM network is comprised of different memory blocks called cells (the rectangles that we see in the image). There are two states that are being transferred to the next cell: the cell state and the hidden state. The memory blocks are responsible for remembering things, and manipulations to this memory are done through three major mechanisms, called gates. Each of them is discussed below.

4.1 Forget Gate

Take the example of a text prediction problem. Let’s assume an LSTM is fed the following sentence:

 

As soon as the first full stop after “person” is encountered, the forget gate realizes that there may be a change of context in the next sentence. As a result, the subject of the sentence is forgotten and the place for the subject is vacated. And when we start speaking about “Dan”, this position of the subject is allocated to “Dan”. This process of forgetting the subject is brought about by the forget gate.

A forget gate is responsible for removing information from the cell state. The information that is no longer required for the LSTM to understand things or the information that is of less importance is removed via multiplication of a filter. This is required for optimizing the performance of the LSTM network.

This gate takes in two inputs: h_t-1 and x_t.

h_t-1 is the hidden state from the previous cell (the output of the previous cell) and x_t is the input at that particular time step. The given inputs are multiplied by the weight matrices and a bias is added. Following this, the sigmoid function is applied to this value. The sigmoid function outputs a vector with values ranging from 0 to 1, corresponding to each number in the cell state. Basically, the sigmoid function is responsible for deciding which values to keep and which to discard. If a ‘0’ is output for a particular value in the cell state, it means that the forget gate wants the cell state to forget that piece of information completely. Similarly, a ‘1’ means that the forget gate wants to remember that entire piece of information. This vector output from the sigmoid function is multiplied element-wise with the cell state.
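As a rough numpy sketch of the computation just described (the shapes, random values and variable names are purely illustrative, not part of any particular library):

import numpy as np

# Illustrative forget gate: f_t = sigmoid(W_f · [h_{t-1}, x_t] + b_f),
# then the cell state is scaled element-wise by f_t (0 forgets, 1 keeps).
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

hidden_size, input_size = 4, 3
rng = np.random.default_rng(0)
W_f = rng.normal(size=(hidden_size, hidden_size + input_size))  # weight matrix
b_f = np.zeros(hidden_size)                                     # bias

h_prev = rng.normal(size=hidden_size)   # h_{t-1}: previous hidden state
x_t = rng.normal(size=input_size)       # x_t: input at the current time step
c_prev = rng.normal(size=hidden_size)   # previous cell state

f_t = sigmoid(W_f @ np.concatenate([h_prev, x_t]) + b_f)  # values between 0 and 1
c_t = f_t * c_prev                                        # forget (≈0) or keep (≈1) each entry
print(f_t, c_t)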

 

4.2 Input Gate

Okay, let’s take another example where the LSTM is analyzing a sentence:

Now the important information here is that “Bob” knows swimming and that he has served in the Navy for four years. This can be added to the cell state; however, the fact that he told all this over the phone is a less important fact and can be ignored. This process of adding some new information can be done via the input gate.

Here is its structure:

 

The input gate is responsible for the addition of information to the cell state. This addition of information is basically a three-step process, as seen in the diagram above.

  1. Regulating what values need to be added to the cell state by involving a sigmoid function. This is basically very similar to the forget gate and acts as a filter for all the information from h_t-1 and x_t.
  2. Creating a vector containing all possible values that can be added (as perceived from h_t-1 and x_t) to the cell state. This is done using the tanh function, which outputs values from -1 to +1.  
  3. Multiplying the value of the regulatory filter (the sigmoid gate) with the created vector (the tanh output) and then adding this useful information to the cell state via the addition operation.

 

Once this three-step process is done, we ensure that only information that is important and not redundant is added to the cell state.
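A comparable illustrative sketch of the input gate’s three steps, again with made-up shapes and values rather than any real model’s parameters:

import numpy as np

# Illustrative input gate: a sigmoid filter, a tanh candidate vector, and their
# element-wise product added to the cell state.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

hidden_size, input_size = 4, 3
rng = np.random.default_rng(1)
W_i = rng.normal(size=(hidden_size, hidden_size + input_size))  # filter weights
W_c = rng.normal(size=(hidden_size, hidden_size + input_size))  # candidate weights
b_i, b_c = np.zeros(hidden_size), np.zeros(hidden_size)

h_prev, x_t = rng.normal(size=hidden_size), rng.normal(size=input_size)
c_t = rng.normal(size=hidden_size)             # cell state after the forget gate
concat = np.concatenate([h_prev, x_t])

i_t = sigmoid(W_i @ concat + b_i)              # step 1: decide which values to let through
c_candidate = np.tanh(W_c @ concat + b_c)      # step 2: candidate values in (-1, +1)
c_t = c_t + i_t * c_candidate                  # step 3: add the filtered new information
print(c_t)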

 

4.3 Output Gate

Not all the information that runs along the cell state is fit to be output at a certain time. We’ll visualize this with an example:

In this phrase, there could be a number of options for the empty space. But we know that the current input, ‘brave’, is an adjective that is used to describe a noun. Thus, whatever word follows has a strong tendency of being a noun. And thus, ‘Bob’ could be an apt output.

This job of selecting useful information from the current cell state and showing it out as an output is done via the output gate. Here is its structure:

 

The functioning of an output gate can again be broken down into three steps:

  1. Creating a vector after applying the tanh function to the cell state, thereby scaling the values to the range -1 to +1.
  2. Making a filter using the values of h_t-1 and x_t, such that it can regulate the values that need to be output from the vector created above. This filter again employs a sigmoid function.
  3. Multiplying the value of this regulatory filter with the vector created in step 1, and sending it out as an output and also to the hidden state of the next cell.

The filter in the above example will make sure that it diminishes all other values but ‘Bob’. Thus the filter needs to be built on the input and hidden state values and be applied on the cell state vector.
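And a matching sketch for the output gate’s three steps, under the same illustrative assumptions:

import numpy as np

# Illustrative output gate: scale the cell state with tanh, build a sigmoid filter
# from h_{t-1} and x_t, and multiply the two to produce the next hidden state.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

hidden_size, input_size = 4, 3
rng = np.random.default_rng(2)
W_o = rng.normal(size=(hidden_size, hidden_size + input_size))  # output filter weights
b_o = np.zeros(hidden_size)

h_prev, x_t = rng.normal(size=hidden_size), rng.normal(size=input_size)
c_t = rng.normal(size=hidden_size)                        # current (updated) cell state

scaled = np.tanh(c_t)                                     # step 1: scale cell state to (-1, +1)
o_t = sigmoid(W_o @ np.concatenate([h_prev, x_t]) + b_o)  # step 2: the regulatory filter
h_t = o_t * scaled                                        # step 3: output / next hidden state
print(h_t)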

 

5. Text generation using LSTMs

We have had enough of the theory and inner workings of LSTMs. Now we will try to build a model that can predict some number of characters following the original text of Macbeth. Most of the classical texts are no longer protected under copyright and can be found here. An updated version of the .txt file can be found here.

We will use the library Keras, which is a high-level API for neural networks and works on top of TensorFlow or Theano. So make sure that before diving into this code you have Keras installed and functional.

Okay, so let’s generate some text!

 

  • Importing dependencies

# Importing dependencies numpy and keras
import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import LSTM
from keras.utils import np_utils

We import all the required dependencies and this is pretty much self-explanatory.

  • Loading text file and creating character to integer mappings

# load text
filename = "/macbeth.txt"
text = (open(filename).read()).lower()

# mapping characters with integers
unique_chars = sorted(list(set(text)))

char_to_int = {}
int_to_char = {}

for i, c in enumerate(unique_chars):
    char_to_int.update({c: i})
    int_to_char.update({i: c})

The text file is opened, and all characters are converted to lowercase letters. In order to facilitate the following steps, we map each character to a respective number. This is done to make the computation part of the LSTM easier.

  • Preparing dataset

# preparing input and output dataset
X = []
Y = []

for i in range(0, len(text) - 50, 1):
    sequence = text[i:i + 50]
    label = text[i + 50]
    X.append([char_to_int[char] for char in sequence])
    Y.append(char_to_int[label])

Data is prepared in a format such that if we want the LSTM to predict the ‘O’ in ‘HELLO’, we would feed in [‘H’, ‘E’, ‘L’, ‘L’] as the input and [‘O’] as the expected output. Similarly, here we fix the length of the sequence that we want (set to 50 in the example) and then save the encodings of each 50-character sequence in X and the expected output, i.e. the 51st character, in Y.

  • Reshaping of X

# reshaping, normalizing and one hot encoding
X_modified = numpy.reshape(X, (len(X), 50, 1))
X_modified = X_modified / float(len(unique_chars))
Y_modified = np_utils.to_categorical(Y)

 

An LSTM network expects the input to be in the form [samples, time steps, features], where samples is the number of data points we have, time steps is the number of time-dependent steps in a single data point, and features refers to the number of variables we have for the corresponding true value in Y. We then scale the values in X_modified between 0 and 1 and one hot encode our true values in Y_modified.

 

  • Defining the LSTM model

# defining the LSTM model
model = Sequential()
model.add(LSTM(300, input_shape=(X_modified.shape[1], X_modified.shape[2]), return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(300))
model.add(Dropout(0.2))
model.add(Dense(Y_modified.shape[1], activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='adam')

A sequential model which is a linear stack of layers is used. The first layer is an LSTM layer with 300 memory units and it returns sequences. This is done to ensure that the next LSTM layer receives sequences and not just randomly scattered data. A dropout layer is applied after each LSTM layer to avoid overfitting of the model. Finally, we have the last layer as a fully connected layer with a ‘softmax’ activation and neurons equal to the number of unique characters, because we need to output one hot encoded result.

  • Fitting the model and generating characters

# fitting the model
model.fit(X_modified, Y_modified, epochs=1, batch_size=30)

# picking a random seed
start_index = numpy.random.randint(0, len(X)-1)
new_string = X[start_index]

# generating characters
for i in range(50):
    x = numpy.reshape(new_string, (1, len(new_string), 1))
    x = x / float(len(unique_chars))

    # predicting
    pred_index = numpy.argmax(model.predict(x, verbose=0))
    char_out = int_to_char[pred_index]
    seq_in = [int_to_char[value] for value in new_string]
    print(char_out)

    new_string.append(pred_index)
    new_string = new_string[1:len(new_string)]

The model is fit with a batch size of 30; the snippet above runs a single epoch for brevity, and training for more epochs (the original experiment uses 100) gives better results. We then pick a random seed sequence from X and start generating characters. The prediction from the model gives out the character encoding of the predicted character; it is then decoded back to the character value and appended to the pattern.

This is what the output of the network would look like:

Eventually, after training for enough epochs, it will give better and better results over time. This is how you would use an LSTM to solve a sequence prediction task.

 

End Notes

LSTMs are a very promising solution to sequence and time series related problems. However, the one disadvantage that I find with them is the difficulty in training them. A lot of time and system resources go into training even a simple model. But that is just a hardware constraint! I hope I was successful in giving you a basic understanding of these networks.

Source: Analyticsvidhya

A Gentle Introduction to Exploding Gradients in Neural Networks

Exploding gradients are a problem where large error gradients accumulate and result in very large updates to neural network model weights during training.

This has the effect of your model being unstable and unable to learn from your training data.

In this post, you will discover the problem of exploding gradients with deep artificial neural networks.

After completing this post, you will know:

  • What exploding gradients are and the problems they cause during training.
  • How to know whether you may have exploding gradients with your network model.
  • How you can fix the exploding gradient problem with your network.

Let’s get started.

What Are Exploding Gradients?

An error gradient is the direction and magnitude calculated during the training of a neural network that is used to update the network weights in the right direction and by the right amount.

In deep networks or recurrent neural networks, error gradients can accumulate during an update and result in very large gradients. These in turn result in large updates to the network weights, and in turn, an unstable network. At an extreme, the values of weights can become so large as to overflow and result in NaN values.

The explosion occurs through exponential growth by repeatedly multiplying gradients through the network layers that have values larger than 1.0.
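A quick numerical illustration of that exponential growth (the factor of 1.5 and the depth of 50 are arbitrary choices for the example):

import numpy as np

# Repeatedly multiplying per-layer gradient factors larger than 1.0 explodes;
# factors smaller than 1.0 vanish instead.
print(np.prod(np.full(50, 1.5)))   # ~6.4e8  -- gradient explodes
print(np.prod(np.full(50, 0.9)))   # ~5.2e-3 -- gradient vanishes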

What Is the Problem with Exploding Gradients?

In deep multilayer Perceptron networks, exploding gradients can result in an unstable network that at best cannot learn from the training data and at worst results in NaN weight values that can no longer be updated.

… exploding gradients can make learning unstable.

— Page 282, Deep Learning, 2016.

In recurrent neural networks, exploding gradients can result in an unstable network that is unable to learn from training data and at best a network that cannot learn over long input sequences of data.

… the exploding gradients problem refers to the large increase in the norm of the gradient during training. Such events are due to the explosion of the long term components

— On the difficulty of training recurrent neural networks, 2013.

How Do You Know if You Have Exploding Gradients?

There are some subtle signs that you may be suffering from exploding gradients during the training of your network, such as:

  • The model is unable to get traction on your training data (e.g. poor loss).
  • The model is unstable, resulting in large changes in loss from update to update.
  • The model loss goes to NaN during training.

If you have these types of problems, you can dig deeper to see if you have a problem with exploding gradients.

There are some less subtle signs that you can use to confirm that you have exploding gradients.

  • The model weights quickly become very large during training.
  • The model weights go to NaN values during training.
  • The error gradient values are consistently above 1.0 for each node and layer during training.

How to Fix Exploding Gradients?

There are many approaches to addressing exploding gradients; this section lists some best practice approaches that you can use.

1. Re-Design the Network Model

In deep neural networks, exploding gradients may be addressed by redesigning the network to have fewer layers.

There may also be some benefit in using a smaller batch size while training the network.

In recurrent neural networks, updating across fewer prior time steps during training, called truncated Backpropagation through time, may reduce the exploding gradient problem.

2. Use Rectified Linear Activation

In deep multilayer Perceptron neural networks, gradient exploding can occur given the choice of activation function, such as the historically popular sigmoid and tanh functions.

Exploding gradients can be reduced by using the rectified linear (ReLU) activation function.

Adopting the ReLU activation function is a new best practice for hidden layers.

3. Use Long Short-Term Memory Networks

In recurrent neural networks, gradient exploding can occur given the inherent instability in the training of this type of network, e.g. via Backpropagation through time that essentially transforms the recurrent network into a deep multilayer Perceptron neural network.

Exploding gradients can be reduced by using the Long Short-Term Memory (LSTM) memory units and perhaps related gated-type neuron structures.

Adopting LSTM memory units is a new best practice for recurrent neural networks for sequence prediction.

4. Use Gradient Clipping

Exploding gradients can still occur in very deep Multilayer Perceptron networks with a large batch size and LSTMs with very long input sequence lengths.

If exploding gradients are still occurring, you can check for and limit the size of gradients during the training of your network.

This is called gradient clipping.

Dealing with the exploding gradients has a simple but very effective solution: clipping gradients if their norm exceeds a given threshold.

— Section 5.2.4, Vanishing and Exploding Gradients, Neural Network Methods in Natural Language Processing, 2017.

Specifically, the values of the error gradient are checked against a threshold value and clipped or set to that threshold value if the error gradient exceeds the threshold.

To some extent, the exploding gradient problem can be mitigated by gradient clipping (thresholding the values of the gradients before performing a gradient descent step).

— Page 294, Deep Learning, 2016.

In the Keras deep learning library, you can use gradient clipping by setting the clipnorm or clipvalue arguments on your optimizer before training.

Good default values are clipnorm=1.0 and clipvalue=0.5.
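For example, a minimal Keras sketch (the model itself is an arbitrary placeholder; only the clipnorm/clipvalue arguments on the optimizer are the point here):

from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

# Placeholder model; gradient clipping is configured on the optimizer.
model = Sequential()
model.add(Dense(32, input_shape=(10,), activation='relu'))
model.add(Dense(1))

# Clip the gradient norm to 1.0; alternatively use SGD(lr=0.01, clipvalue=0.5)
opt = SGD(lr=0.01, clipnorm=1.0)
model.compile(loss='mean_squared_error', optimizer=opt)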

5. Use Weight Regularization

Another approach, if exploding gradients are still occurring, is to check the size of network weights and apply a penalty to the network’s loss function for large weight values.

This is called weight regularization and often an L1 (absolute weights) or an L2 (squared weights) penalty can be used.

Using an L1 or L2 penalty on the recurrent weights can help with exploding gradients

— On the difficulty of training recurrent neural networks, 2013.

In the Keras deep learning library, you can use weight regularization by setting the kernel_regularizer argument on your layer and using an L1 or L2 regularizer.
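A minimal sketch of what this might look like in Keras (the layer sizes and the 0.01 coefficients are illustrative placeholders, not recommendations):

from keras.models import Sequential
from keras.layers import LSTM, Dense
from keras import regularizers

# L2 penalties on the input and recurrent weights of an LSTM layer.
model = Sequential()
model.add(LSTM(64, input_shape=(50, 1),
               kernel_regularizer=regularizers.l2(0.01),
               recurrent_regularizer=regularizers.l2(0.01)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam')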

Summary

In this post, you discovered the problem of exploding gradients when training deep neural network models.

Specifically, you learned:

  • What exploding gradients are and the problems they cause during training.
  • How to know whether you may have exploding gradients with your network model.
  • How you can fix the exploding gradient problem with your network.

Source: Machinelearningmastery

 

If you’re interested in a career in Deep Learning call us at Hanson Regan on  +44 0208 290 4656

 

Beijing dominates China’s artificial intelligence landscape

Zhong Guan Cun, the city's vast technology hub, has become the AI innovation highland of the country.

There are 1,070 artificial intelligence companies in Beijing, accounting for 26% of the total number in China, according to the AI Development White Paper published by the Beijing Municipal Commission of Economy and Information Technology, The Paper reported.

As of May 8, the number of AI enterprises in China hit 4,040, while those with venture capital reached 1,237 — 35% of them based in Beijing.

Zhong Guan Cun, the technology hub in Beijing, has become the AI innovation highland of the country. But more than half of Beijing’s AI firms are still in the initial stage.

At least 29% of them are in A-round, followed by 6.7% in Pre-A round. About 18.5% and 2.7% have received funding from angel investors and seed investors, respectively.

Though Beijing is equipped with academic resources and a strong talent pool, the development of the AI industry still faces problems, such as a lack of original innovation capacity when compared to US counterparts.

Also, the lack of high-end chips, key components and high-precision sensors may pose a great challenge to the development of the sector in the future, the report says.

Source: ATimes

If you’re interested in a career in Artificial Intelligence call us at Hanson Regan on  +44 0208 290 4656

Combating hunger with artificial intelligence

In order to improve world food conditions, a team around computer science professor Kristian Kersting was inspired by the technology behind Google News.

Almost 800 million people worldwide suffer from malnutrition. In the future there could be around 9.7 billion people—around 2.2 billion more than today. Global demand for food will increase as climate change leaves more and more soil infertile. How should future generations feed themselves?

Kristian Kersting, Professor of Machine Learning at the Technische Universität Darmstadt, and his team see a potential solution in the application of artificial intelligence (AI). Machine learning, a special method of AI, could be the basis for so-called precision farming, which could be used to achieve higher yields on areas of equal or smaller size. The project is funded by the Federal Ministry of Food and Agriculture. Partners are the Institute of Crop Science and Resource Conservation (INRES) at the University of Bonn and the Aachen-based company Lemnatec.

"First of all, we want to understand what physiological processes in plants look like when they suffer from stress," said Kersting. "Stress occurs, for example, when plants do not absorb enough water or are infected with pathogens. Machine learning can help us to analyse these processes more precisely." This knowledge could be used to cultivate more resistant plants and to combat diseases more efficiently.

The researchers installed a hyperspectral camera that records a broad spectrum of wavelengths and provides deep insights into the plants. The more data available on the physiological processes of a plant during its growth cycle, the better the software is able to identify recurring patterns that are responsible for stress. However, too much data can be a problem, as the calculations become too complex. The researchers therefore need algorithms that use only part of the data for learning without sacrificing accuracy.

Kersting's team found a clever solution: To evaluate the data, the team used a highly advanced learning process from language processing, which is used, for example, at Google News. There, an AI selects the relevant articles for the reader from tens of thousands of new articles every day and sorts them by topic. This is done using probability models in which all words of a text are assigned to a specific topic. Kersting's trick was to treat the hyperspectral images of the camera like words: The software assigns certain image patterns to a topic such as the stress state of the plant.

The researchers are currently working on teaching the software to optimise itself using deep learning and to find the patterns that represent stress more quickly. "A healthy spot can for instance be identified from the chlorophyll content in the growth process of the plant," said Kersting. "When a drying process occurs, the measured spectrum changes significantly." The advantage of machine learning is that it can recognise such signs earlier than a human expert, as the software learns to pay attention to more subtleties.

It is hoped that someday, cameras can be installed along rows of plants on an assembly line in the greenhouse, allowing the software to point out abnormalities at any time. Through a constant exchange with plant experts, the system should also learn to identify even unknown pathogens. "Ultimately, our goal is a meaningful partnership between human and artificial intelligence, in order to address the growing problem of world nutrition," says Kersting.

Source: phys.org

If you’re interested in a career in Artificial Intelligence call us at Hanson Regan on  +44 0208 290 4656

Artificial intelligence footstep recognition system could be used for airport security

The way you walk and your footsteps could be used as a biometric at airport security instead of fingerprinting and eye-scanning.

Researchers at The University of Manchester in collaboration with the Universidad Autónoma de Madrid, Spain, have developed a state-of-the-art artificial intelligence (AI), biometric verification system that can measure a human’s individual gait or walking pattern. It can successfully verify an individual simply by them walking on a pressure pad in the floor and analysing the footstep 3D and time-based data.

The results, published earlier this year in one of the top machine learning research journals, the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), showed that, on average, the AI system developed correctly identified an individual almost 100% of the time, with just a 0.7 error rate.

Physical biometrics, such as fingerprints, facial recognition and retinal scans, are currently more commonly used for security purposes. However, so-called behavioural biometrics, such as gait recognition, also capture unique signatures delivered by a person’s natural behavioural and movement patterns. The team tested their data by using a large number of so-called ‘impostors’ and a small number of users in three different real-world security scenarios. These were airport security checkpoints, the workplace, and the home environment. 

Omar Costilla Reyes, from Manchester’s School of Electrical and Electronic Engineering, explains: “Each human has approximately 24 different factors and movements when walking, resulting in every individual person having a unique, singular walking pattern. Therefore monitoring these movements can be used, like a fingerprint or retinal scan, to recognise and clearly identify or verify an individual.”

To train the AI system to learn such movement patterns, the team used SfootBD, the largest footstep database to date, containing nearly 20,000 footstep signals from 127 different individuals.
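As a rough illustration of how such a verification system could be trained, and not the Manchester/Madrid model itself, the toy sketch below simulates ground-reaction-force curves for one genuine user and several "impostors" and fits a simple SVM verifier; the signal shapes, features and classifier choice are all assumptions.

    # Toy sketch of footstep verification (illustrative only, not the TPAMI system).
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)

    def footstep(user_id, n_samples=100):
        """Hypothetical 1-D ground-reaction-force curve for one footstep."""
        t = np.linspace(0, 1, n_samples)
        shape = np.sin(np.pi * t) + 0.1 * user_id * np.sin(2 * np.pi * t)
        return shape + 0.05 * rng.standard_normal(n_samples)

    # Simulate footsteps for a genuine user (label 1) and "impostors" (label 0).
    X = np.array([footstep(1) for _ in range(200)] +
                 [footstep(u) for u in (2, 3, 4, 5) for _ in range(50)])
    y = np.array([1] * 200 + [0] * 200)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=0)

    clf = SVC().fit(X_train, y_train)
    print("verification accuracy:", clf.score(X_test, y_test))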

Omar added: “Focussing on non-intrusive gait recognition by monitoring the force exerted on the floor during a footstep is very challenging. That’s because distinguishing between the subtle variations from person to person is extremely difficult to define manually; that is why we had to come up with a novel AI system to solve this challenge from a new perspective.”

One of the key benefits of footstep recognition is that, unlike being filmed or scanned at an airport, the process is non-intrusive for the individual and resilient to noisy environmental conditions. The person doesn’t even need to remove their footwear when walking on the pressure pads, as the system isn’t based on the footprint shape itself but on their gait.

Other applications for the technology include smart steps that could recognise neurodegeneration, which could have positive implications in the healthcare sector. This is another area in which Omar intends to advance his footstep recognition research.

He added: “The research is also being developed to address the healthcare problem of markers for cognitive decline and onset of mental illness, by using raw footstep data from a wide-area floor sensor deployable in smart dwellings. Human movement can be a novel biomarker of cognitive decline, which can be explored like never before with novel AI systems.”

The research was also selected for the University’s Faculty of Science and Engineering (FSE) "in-abstract", a compendium of the very best new research coming from FSE.

Source: Manchester.ac.uk

If you’re interested in a career in Artificial Intelligence call us at Hanson Regan on  +44 0208 290 4656

Why Quantum Computing Should Be on Your Radar Now

Boston Consulting Group and Forrester are advising clients to get smart about quantum computing and start experimenting now so they can separate hype from reality.

There's a lot of chatter about quantum computing, some of which is false and some of which is true. For example, there's a misconception that quantum computers are going to replace classical computers for every possible use case, which is false. "Quantum computing" is not necessarily synonymous with "quantum leap." Instead, quantum computing involves quantum physics, which makes it fundamentally different from classical, binary computers. Binary computers can only process 1s and 0s; quantum computers can process many more possibilities simultaneously.

If math and physics scare you, a simple analogy (albeit not an entirely correct one) involves a light switch and a dimmer switch, representing a classical computer and a quantum computer, respectively. The standard light switch has two states: on and off. The dimmer switch provides many more options, including on, off, and a range of states between on and off that are experienced as degrees of brightness and darkness. With a dimmer switch, a light bulb can be on, off, or a combination of both.

If math and physics do not scare you, quantum computing involves quantum superposition, which explains the nuances more eloquently.
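For readers who want to see the arithmetic, the short NumPy sketch below (ours, not from the article) shows a single qubit placed into an equal superposition by a Hadamard gate, and how the number of amplitudes needed to describe n qubits grows as 2^n.

    # Minimal sketch of qubit superposition as state vectors (NumPy only).
    import numpy as np

    ket0 = np.array([1, 0], dtype=complex)          # |0>, the "off" state
    ket1 = np.array([0, 1], dtype=complex)          # |1>, the "on" state

    # The Hadamard gate puts a qubit into an equal superposition of |0> and |1>.
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    psi = H @ ket0
    print(psi)                                       # [0.707+0j, 0.707+0j]
    print(np.abs(psi) ** 2)                          # 50/50 measurement probabilities

    # n qubits need 2**n amplitudes: the "dimmer switch" scales exponentially.
    n = 10
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0
    print(state.size)                                # 1024 amplitudes for 10 qubits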

One reason quantum computers are not an absolute replacement for classical computers has to do with their physical requirements. Quantum computers require extremely cold conditions in order for quantum bits or qubits to remain "coherent." For example, much of D-Wave's Input/Output (I/O) system must function at 15 millikelvin (mK), which is near absolute zero. 15 mK is equivalent to minus 273.135 degrees Celsius or minus 459.643 degrees Fahrenheit. By comparison, the classical computers most individuals own have built-in fans, and they may include heat sinks to dissipate heat. Supercomputers tend to be cooled with circulated water. In other words, the ambient operating environments required by quantum computers and classical computers vary greatly. Naturally, there are efforts that are aimed at achieving quantum coherence in room temperature conditions, one of which is described here.

Quantum computers and classical computers are fundamentally different tools. In a recent report, Brian Hopkins, vice president and principal analyst at Forrester, explained, "Quantum computing is a class of emerging hardware and software that exploits subatomic phenomenon to solve computationally hard problems."

What to expect, when

There's a lot of confusion about the current state of quantum computing which industry research firms Boston Consulting Group (BCG) and Forrester are attempting to clarify.

In the Forrester report, Hopkins estimates that quantum computing is in the early stages of commercialization, a stage that will persist through 2025 to 2030. The growth stage will begin at the end of that period and continue through the end of the forecast period which is 2050.

A recent BCG report estimates that quantum computing will become a $263 billion to $295 billion market under two different forecasting scenarios, both of which span 2025 to 2050. BCG also reasons that the quantum computing market will advance in three distinct phases:

  1. The first generation will be specific to applications that are quantum in nature, similar to what D-Wave is doing.
  2. The second generation will unlock what report co-author and BCG senior partner Massimo Russo calls "more interesting use cases."
  3. In the third generation, quantum computers will have achieved the number of logical qubits required to achieve Quantum Supremacy. (Note: Quantum Supremacy and logical qubits versus physical qubits are important concepts addressed below.)

"If you consider the number of logical qubits [required for problem-solving], it's going to take a while to figure out what use cases we haven't identified yet," said BCG's Russo. "Molecular simulation is closer. Pharma company interest is higher than in other industries."

Life sciences, developing new materials, manufacturing, and some logistics problems are ideal for quantum computers for a couple of possible reasons:

  • A quantum machine is more adept at solving quantum mechanics problems than classical computers, even when classical computers are able to simulate quantum computers
  • The nature of the problem is so difficult that it can't be solved using classical computers at all, or it can't be solved using classical computers within a reasonable amount of time, at a reasonable cost.

There are also hybrid use cases in which parts of a problem are best solved by classical computers and other parts of the problem are best solved by quantum computers. In this scenario, the classical computer breaks the problem apart, communicates with the quantum computer via an API, receives the result(s) from the quantum computer and then assembles a final answer to the problem, according to BCG's Russo.
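A minimal sketch of that coprocessor pattern is shown below. The QuantumBackend class and its solve_subproblem method are invented placeholders for whatever vendor API a real system would call; only the classical decompose / offload / reassemble structure is the point.

    # Hypothetical sketch of the hybrid pattern described above; the QuantumBackend
    # API is invented for illustration and is not a real vendor SDK.
    class QuantumBackend:
        def solve_subproblem(self, subproblem):
            # Stand-in for a call to a quantum coprocessor over an API.
            return {"subproblem": subproblem, "result": "optimised-part"}

    def solve(problem, quantum):
        # 1) Classical computer decomposes the problem.
        quantum_parts = [p for p in problem if p.startswith("quantum:")]
        classical_parts = [p for p in problem if not p.startswith("quantum:")]

        # 2) Quantum coprocessor handles the pieces it is suited for.
        quantum_results = [quantum.solve_subproblem(p) for p in quantum_parts]

        # 3) Classical computer assembles the final answer.
        return {"classical": classical_parts, "quantum": quantum_results}

    print(solve(["quantum:molecular-simulation", "report-generation"], QuantumBackend()))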

"Think of it as a coprocessor that will address problems in a quantum way," he said.

While there is a flurry of quantum computing announcements at present, practically speaking, it may take a decade to see the commercial fruits of some efforts and multiple decades to realize the value of others.

Logical versus physical qubits

All qubits are not equal, which is true in two regards. First, there's an important difference between logical qubits and physical qubits. Second, the large vendors are approaching quantum computing differently, so their "qubits" may differ.

When people talk about quantum computers or semiconductors that have X number of qubits, they're referring to physical qubits. The reason the number of qubits matters is that the computational power grows exponentially with the addition of each individual qubit. According to Microsoft, a calculator is more powerful than a single qubit, and "simulating a 50-qubit quantum computation would arguably push the limits of existing supercomputers."
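A back-of-the-envelope calculation shows why: simulating n qubits classically requires storing 2^n complex amplitudes, so memory doubles with each added qubit. Assuming 16 bytes per amplitude, the snippet below prints the rough state-vector size at a few qubit counts.

    # Rough arithmetic behind the exponential-growth claim
    # (assumes 16 bytes per complex amplitude).
    for n in (20, 30, 40, 50):
        amplitudes = 2 ** n
        gib = amplitudes * 16 / 2 ** 30
        print(f"{n} qubits -> {amplitudes:,} amplitudes ~ {gib:,.0f} GiB of state vector")

At 50 qubits this works out to roughly 16 million GiB, which is why the Microsoft comparison with supercomputers is not an exaggeration.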

BCG's Russo said for semiconductors, the number of physical qubits required to create a logical qubit can be as high as 3,000:1. Forrester's Hopkins stated he's heard numbers ranging from 10,000 to 1 million or more, generally.

"No one's really sure," said Hopkins. "Microsoft thinks [it's] going to be able to achieve a 5X reduction in the number of physical qubits it takes to produce a logical qubit."  

The difference between physical qubits and logical qubits is extremely important because physical qubits are so unstable they need the additional qubits to ensure error correction and fault tolerance.
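The arithmetic implied by those ratios is stark; the snippet below simply multiplies them out for a hypothetical application needing 100 logical qubits (the 100 is our assumption, the ratios are the ones quoted above).

    # Illustrative error-correction overhead using the ratios quoted in the article.
    logical_qubits_needed = 100            # hypothetical application requirement
    for physical_per_logical in (1_000, 3_000, 10_000):
        print(physical_per_logical, "physical per logical ->",
              logical_qubits_needed * physical_per_logical, "physical qubits")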

Get a grip on Quantum Supremacy

Quantum Supremacy does not signal the death of classical computers for the reasons stated above. Google cites this definition: "A critical question for the field of quantum computing in the near future is whether quantum devices without error correction can perform a well-defined computational task beyond the capabilities of state-of-the-art classical computers, achieving so-called quantum supremacy."

"We're not going to achieve Quantum Supremacy overnight, and we're not going to achieve it across the board," said Forrester's Hopkins. "Supremacy is a stepping stone to delivering a solution. Quantum Supremacy is going to be achieved domain by domain, so we're going to achieve Quantum Supremacy, which Google is advancing, and then Quantum Value, which IBM is advancing, in quantum chemistry or molecular simulation or portfolio risk management or financial arbitrage."

The fallacy is believing that Quantum Supremacy means that quantum computers will be better at solving all problems, ergo classical computers are doomed.

Given the proper definition of the term, Google is attempting to achieve Quantum Supremacy with its 72-qubit quantum processor, Bristlecone.

How to get started now

First, understand the fundamental differences between quantum computers and classical computers. This article is merely introductory, given its length.

Next, (before, after and simultaneously with the next piece of advice) find out what others are attempting to do with quantum computers and quantum simulations and consider what use cases might apply to your organization. Do not limit your thinking to what others are doing. Based on a fundamental understanding of quantum computing and your company's business domain, imagine what might be possible, whether the end result might be a minor percentage optimization that would give your company a competitive advantage or a disruptive innovation such as a new material.

Experimentation is also critical, not only to test hypotheses, but also to better understand how quantum computing actually works. The experimentation may inspire new ideas, and it will help refine existing ideas. From a business standpoint, don't forget to consider the potential value that might result from your work.

Meanwhile, if you want to get hands-on experience with a real quantum computer, try IBM Q. The "IBM Q Experience" includes user guides, interactive demos, the Quantum Composer which enables the creation of algorithms that run on real quantum computing hardware, and the QISKit software developer kit.
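As a taste of what that looks like, here is a minimal Bell-state circuit in Qiskit, assuming an older Qiskit release in which Aer and execute are importable from the top-level package (the API has changed across versions); it is illustrative rather than taken from the IBM Q user guides.

    # Minimal Bell-state example (assumes an older Qiskit where Aer/execute
    # are top-level imports; not taken from IBM's official guides).
    from qiskit import QuantumCircuit, Aer, execute

    qc = QuantumCircuit(2, 2)
    qc.h(0)                      # put qubit 0 into superposition
    qc.cx(0, 1)                  # entangle qubit 1 with qubit 0
    qc.measure([0, 1], [0, 1])

    backend = Aer.get_backend("qasm_simulator")
    counts = execute(qc, backend, shots=1024).result().get_counts(qc)
    print(counts)                # roughly half '00' and half '11'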

Also check out Quantum Computing Playground which is a browser-based WebGL Chrome experiment that features a GPU-accelerated quantum computer with a simple IDE interface, its own scripting language with debugging and 3D quantum state visualization features.

In addition, the Microsoft Quantum Development Kit Preview is available now. It includes the Q# language and compiler, the Q# standard library, a local quantum machine simulator, a trace quantum simulator that estimates the resources required to run a quantum program, and a Visual Studio extension.

Source: Informationweek

The software robot invasion is underway

 Companies are adopting robotic process automation tools as they look to reduce errors and increase process efficiency.

One of the more disruptive emerging technologies, robotic process automation (RPA), appears primed for significant growth, despite the fact that many organizations remain confused or concerned about the impact these tools might have on their operations.

For some, RPA is seen as a technology designed to replace full-time human labor outright and therefore to be treated with caution. For others, it has the potential for huge cost savings and can enable enterprises to move people from mundane tasks such as data entry to more exciting endeavors.

Recent research indicates that there's a growing demand for RPA, which involves the use of software robots to handle any rules-based repetitive tasks quickly and cost effectively. And deploying the technology doesn't have to result in throwing a lot of people out of work.
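To ground the phrase "rules-based repetitive tasks", here is a deliberately toy sketch of the kind of logic an invoice-reconciliation bot automates; the data, tolerance and rules are invented for illustration and bear no relation to any specific RPA product.

    # Toy illustration of a rules-based "software robot" reconciling invoices
    # against purchase orders; data and rules are invented for illustration.
    invoices = [
        {"po": "PO-1001", "amount": 500.00},
        {"po": "PO-1002", "amount": 310.00},
        {"po": "PO-9999", "amount": 75.00},
    ]
    purchase_orders = {"PO-1001": 500.00, "PO-1002": 300.00}

    def reconcile(invoice, tolerance=5.00):
        expected = purchase_orders.get(invoice["po"])
        if expected is None:
            return "escalate: unknown purchase order"
        if abs(invoice["amount"] - expected) <= tolerance:
            return "auto-approve"
        return "escalate: amount mismatch"

    for inv in invoices:
        print(inv["po"], "->", reconcile(inv))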

"Interest and adoption of RPA has spiked dramatically across [the largest] organizations," said Tony Abel, a managing director with consulting firm Protiviti. "Organizations that have been dabbling in trials of other AI [artificial intelligence] technologies are realizing that to complete the vision of their digital transformation, they need to include an AI component that addresses their operational challenges."

Most organizations that have deployed RPA are looking to reduce errors and processing times and to integrate across expansive technology platforms, Abel said. "They're also looking to improve controls that both accelerate the existing audit process and anticipate greater complexity in audit processing in the future," he said.

Companies that are truly leveraging the value of RPA are doing so in such a way that improves their human capital position by replacing or enhancing activities currently performed by humans with robots, Abel said. Others are still reluctant to recognize the direct correlation between what a robot can do and what has historically been done by humans, he said. They are therefore hesitant to invest in the technology.

Clearly, these are still the early days of RPA implementation.

"Many organizations are still just getting started," Abel said. "They begin with a specific use case, usually by applying proof-of-concept bots in one small area of the business, whether that's supplier setup, system access provisioning, or invoice reconciliation."

Once they realize the value, they then look across the enterprise to other business processes that could reap the benefits of automation, Abel said.

"Another trend we are seeing is the use of robotics in delivery of services, particularly outsourced services," Abel said. "Also, the continual increase in labor rates in major off-shore locations is driving substitution of human labor for automation."

There are no guarantees of success. "We've seen a number of organizations that have stumbled with RPA implementations," Abel said. This usually occurs in large enterprises that are highly bureaucratic, he said.

Often several areas within an organization are running trials of one or multiple RPA products without fully committing or appropriately dedicating the time and skills necessary. "They are also not talking with one another," Abel said. "They have approached it with one foot out the door and become disillusioned with the results."

Disillusionment also comes when organizations are not able to reduce as much human capital as they had hoped. "Their business cases [and return on investment] was based purely on reducing headcount, which is a narrow way to view the value RPA can provide," Abel said. "The issues organizations are facing are a consequence of not having proper guidance and leadership in their RPA journey."

Source: ZDnet

Cyber security: Machine learning to be the main focus in 2018

Identified as one of this year’s biggest issues, machine learning has some very diverse applications in the world of cyber security.

In a landscape marked by an explosion in the number of security incidents, machine learning should be the main focal point in 2018. The promise of automated learning is of as much interest to hackers as it is to companies concerned with protecting their informational assets. The subject has even made it onto McAfee’s list of the five most important cyber security trends for 2018.

Machine learning as a new battleground

Identified as one of this year’s biggest issues, machine learning has some very diverse applications in the world of cyber security. For example, it can be used to analyse the activities carried out by an authentication service so as to trigger an alert or block access when abnormal behaviour is detected. In this context, the system will study all the parameters of the attempted connection and seek to establish all the useful correlations that will allow it to decide whether access should, or shouldn’t, be authorised. Here, it’s the system’s ability to collect and process large volumes of data in real time that gives the machine a form of intelligence.
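A minimal sketch of that idea, using invented login features and scikit-learn's IsolationForest rather than any particular security product, might look like this:

    # Illustrative sketch of anomaly detection on authentication attempts
    # (features and thresholds are invented; not any specific product).
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(2)

    # Historical logins: [hour of day, failed attempts before success, km from usual location]
    normal = np.column_stack([
        rng.normal(10, 2, 1000),      # mostly office hours
        rng.poisson(0.2, 1000),       # few failed attempts
        rng.exponential(5, 1000),     # close to the usual location
    ])

    detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    suspicious = np.array([[3, 8, 4200]])   # 3 a.m., many failures, far away
    print(detector.predict(suspicious))     # -1 means "block or trigger an alert"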

On the other hand, attackers are well aware of the benefits of this approach and are exploiting it themselves, to probe for vulnerabilities or to industrialise their social engineering campaigns. Their work has given rise to new tools that can learn and adapt to exploit breaches more efficiently. We just need to wait and see which channels these attacks will take.

Other major trends in 2018

Marked by wide-scale offensives such as WannaCry and BadRabbit, 2017 saw more than a 50% increase in the number of ransomware attacks. McAfee estimates that in 2018 hackers will likely carry out fewer but more targeted attacks, in order to maximise the chances of success. The market may then shift from a volume-based approach towards more sophisticated tools oriented at the most lucrative victims. Smartphones will be among the hottest new targets.

Particular attention should be paid to new applications being distributed by one or several cloud providers following the “serverless” model. This new way of using resources on demand introduces new security risks: each new application used actually constitutes a new potential attack vector.

And for the last of these trends: the protection of private individuals against threats arising from the growth in personal data, fostered in particular by the wide accessibility of IoT. McAfee draws attention to two aspects of the phenomenon that need to be considered: the first concerns the misuse, particularly in marketing, that can come from device manufacturers exploiting this information, despite the upcoming General Data Protection Regulation (GDPR).

As a corollary to the previous point, McAfee also underlines the often poorly managed question of consent given by end-users of online services that involve personal data.

Of machines and men

Conclusion? Now more than ever, cyber security will be the concern of both machines and humans in 2018. Machines will have to learn how to come to terms with ever more sophisticated techniques of attack and defence. Humans, on the other hand, will have to learn how to manage how their information is used.

Source: Soprasteria

Quote of the Week

"Technology can be our best friend, and technology can also be the biggest party pooper of our lives. It interrupts our own story, interrupts our ability to have a thought or a daydream, to imagine something wonderful, because we're too busy bridging the walk from the cafeteria back to the office on the cell phone." 

Steven Spielberg
 

Artificial Intelligence will transform Universities

As AI surpasses human abilities in Go and poker – two decades after Deep Blue trounced chess grandmaster Garry Kasparov – it is seeping into our lives in ever more profound ways. It affects the way we search the web, receive medical advice and whether we receive finance from our banks.

Artificial Intelligence (AI) is a technology whose time has come.

The most innovative AI breakthroughs, and the companies that promote them – such as DeepMind, Magic Pony, Aysadi, Wolfram Alpha and Improbable – have their origins in universities. Now AI will transform universities.

We believe AI is a new scientific infrastructure for research and learning that universities will need to embrace and lead, otherwise they will become increasingly irrelevant and eventually redundant.

Through their own brilliant discoveries, universities have sown the seeds of their own disruption. How they respond to this AI revolution will profoundly reshape science, innovation, education – and society itself.

DeepMind was created by three scientists, two of whom met while working at University College London. Demis Hassabis, one of DeepMind’s founders, who has a PhD in cognitive neuroscience from UCL and has undertaken postdoctoral studies at MIT and Harvard, is one of many scientists convinced that AI and machine learning will improve the process of scientific discovery.

It is already eight years since scientists at the University of Aberystwyth created a robotic system that carried out an entire scientific process on its own: formulating hypotheses, designing and running experiments, analysing data, and deciding which experiments to run next.

Complex data sets

Applied in science, AI can autonomously create hypotheses, find unanticipated connections, and reduce the cost of gaining insights and the ability to be predictive.

AI is being used by publishers such as Reed Elsevier for automating systematic academic literature reviews, and can be used for checking plagiarism and misuse of statistics. Machine learning can potentially flag unethical behaviour in research projects prior to their publication.

AI can combine ideas across scientific boundaries. There are strong academic pressures to deepen intelligence within particular fields of knowledge, and machine learning helps facilitate the collision of different ideas, joining the dots of problems that need collaboration between disciplines.

As AI gets more powerful, it will not only combine knowledge and data as instructed, but will search for combinations autonomously. It can also assist collaboration between universities and external parties, such as between medical research and clinical practice in the health sector.

The implications of AI for university research extend beyond science and technology.

Philosophical questions

In a world where so many activities and decisions that were once undertaken by people will be replaced or augmented by machines, profound philosophical questions arise about what it means to be human. Computing pioneer Douglas Engelbart – whose inventions include the mouse, windows and cross-file editing – saw this in 1962 when he wrote of “augmenting human intellect”.

Expertise in fields such as psychology and ethics will need to be applied to thinking about how people can more rewardingly work alongside intelligent machines and systems.

Research is needed into the consequences of AI on the levels and quality of employment and the implications, for example, for public policy and management.

When it comes to AI in teaching and learning, many of the more routine academic tasks (and least rewarding for lecturers), such as grading assignments, can be automated. Chatbots, intelligent agents using natural language, are being developed by universities such as the Technical University of Berlin; these will answer questions from students to help plan their course of studies.

Virtual assistants can tutor and guide more personalized learning. As part of its Open Learning Initiative (OLI), Carnegie Mellon University has been working on AI-based cognitive tutors for a number of years. It found that its OLI statistics course, run with minimal instructor contact, resulted in comparable learning outcomes for students with fewer hours of study. In one course at the Georgia Institute of Technology, students could not tell the difference between feedback from a human being and a bot.

Global classroom

Mixed reality and computer vision can provide a high-fidelity, immersive environment to stimulate interest and understanding. Simulations and games technology encourage student engagement and enhance learning in ways that are more intuitive and adaptive. They can also engage students in co-developing knowledge, involving them more in university research activities. The technologies also allow people outside of the university and from across the globe to participate in scientific discovery through global classrooms and participative projects such as Galaxy Zoo.

As well as improving the quality of education, AI can make courses available to many more people. Previously access to education was limited by the size of the classroom. With developments such as Massive Open Online Courses (MOOCs) over the last five years, tens of thousands of people can learn about a wide range of university subjects.

It still remains the case, however, that much advanced learning, and its assessment, requires personal and subjective attention that cannot be automated. Technology has ‘flipped the classroom’, forcing universities to think about where we can add real value – such as personalised tuition, and more time with hands-on research, rather than traditional lectures.

Monitoring performance

University administrative processes will benefit from utilising AI on the vast amounts of data they produce during their research and teaching activities. This can be used to monitor performance against their missions, be it in research, education or promotion of diversity, and can be produced frequently to assist more responsive management. It can enhance the quality of performance league tables, which are often based on data with substantial time lags. It can allow faster and more efficient applicant selection.

AI allows the tracking of individual student performance, and universities such as Georgia State and Arizona State are using it to predict marks and indicate when interventions are needed to allow students to reach their full potential and prevent them from dropping out.

Such data analytics of students and staff raises weighty questions about how to respect privacy and confidentiality, which require judicious codes of practice.

The blockchain is being used to record grades and qualifications of students and staff in an immediately available and incorruptible format, helping prevent unethical behaviour, and could be combined with AI to provide new insights into student and career progression.

Universities will need to be attuned to the new opportunities AI produces for supporting multidisciplinarity. In research this will require creating new academic departments and jobs, with particular demands for data scientists. Curricula will need to be responsive, educating the scientists and technologists who are creating and using AI, and preparing students in fields as diverse as medicine, accounting, law and architecture, whose future work and careers will depend on how successfully they ally their skills with the capabilities of machines.

New curricula should allow for the unpredictable path of AI’s development, and should be based on deep understanding, not on the immediate demands of companies.

Addressing the consequences

Universities are the drivers of disruptive technological change, like AI and automation. It is the duty of universities to reflect on their broader social role, and create opportunities that will make society resilient to this disruption.

We must address the consequences of technological unemployment, and universities can help provide skills and opportunities for people whose jobs have been adversely affected.

There is stiff competition for people skilled in the development and use of AI, and universities see many of their talented staff attracted to work in the private sector. One of the most pressing AI challenges for universities is the need for them to develop better employment conditions and career opportunities to retain and incentivize their own AI workers. They need to create workplaces that are flexible, agile and responsive to interactions with external sources of ideas, and are open to the mixing of careers as people move between universities and business.

The fourth industrial revolution is profoundly affecting all elements of contemporary societies and economies. Unlike the previous revolutions, where the structure and organization of universities were relatively unaffected, the combination of technologies in AI is likely to shake them to their core. The very concept of ‘deep learning’, central to progress in AI, clearly impinges on the purpose of universities, and may create new competition for them.

If done right, AI can augment and empower what universities already do; but continuing their missions of research, teaching and external engagement will require fundamental reassessment and transformation. Are universities up to the task?

Source: Weforum

Deep Sea exploring with Lasers & big data

Advances in computing power and smart data tools are allowing scientists to build amazing high-resolution maps of the ocean floor

We currently know more about the surface of Mars than we do about our planet’s ocean floor. This seems even more ridiculous when you consider that the oceans cover 71 percent of the Earth. They also play a vital role in providing food and fresh air (ocean plants produce half of the world's oxygen), as well as shaping our weather and climate.

Of course, “most people think the bottom of the ocean is like a giant bathtub filled with mud — boring, flat and dark,” said oceanographer Robert D. Ballard1, the man who discovered the wreck of the Titanic. “But it contains the largest mountain range on earth, canyons far grander than the Grand Canyon and towering vertical cliffs rising up three miles — more than twice the height of Yosemite’s celebrated El Capitan.”

Which begs the question: what else lies hidden in the deep?

With only 5 percent of the ocean floor mapped in any real detail, there’s undoubtedly much more to discover. But it’s a mammoth task. The Seabed 2030 project will spend the next 13 years systematically depth-logging 140 million square miles of ocean, with the goal of leaving no feature larger than 100 metres unmapped.

Even today, there’s no shortage of bottom topography (aka bathymetry) data. Scientists can draw information from ships, ROVs, buoys and satellites, with measurements taken using a combination of multibeam sonar, Lidar and laser altimetry.

The challenge isn’t gathering the data, it’s making sense of it all.


Enter big data analytics. Advances in computing power and smart data tools are allowing scientists to build high-resolution maps from a variety of different sources. For example, National ICT Australia (NICTA) and the University of Sydney used big data analytics and AI to convert 15,000 seafloor sediment samples into a unique digital map2.
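The general recipe behind such maps can be sketched in a few lines: learn a model from environmental covariates at sampled sites to sediment class, then predict the class at unsampled grid locations. The example below uses synthetic data and a random forest purely for illustration; it is not the NICTA/Sydney pipeline.

    # Toy sketch of learning a seafloor sediment map from point samples
    # (synthetic data; not the actual NICTA/University of Sydney method).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(3)

    # Synthetic "samples": depth (m), mean temperature (C), distance to coast (km)
    X = np.column_stack([rng.uniform(0, 6000, 500),
                         rng.uniform(-2, 25, 500),
                         rng.uniform(0, 1000, 500)])
    y = (X[:, 0] > 3000).astype(int)      # pretend class 1 = deep-sea clay, 0 = sand

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Predict sediment class at a handful of new, unsampled locations.
    grid = np.column_stack([rng.uniform(0, 6000, 10),
                            rng.uniform(-2, 25, 10),
                            rng.uniform(0, 1000, 10)])
    print(model.predict(grid))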

The Black Sea Maritime Archaeological Project (MAP), meanwhile, might have started laser-mapping the 168,500 square mile inland sea to study the effects of climate change3. But the data ultimately revealed over 60 undiscovered shipwrecks spanning 2,500 years of maritime history. Finds included vessels from the Roman, Byzantine and Ottoman periods.

Big data analytics isn’t just helping to map the world’s oceans. It’s becoming instrumental in how we monitor and protect them. The technology is already being used to regulate fishing and to provide real-time data for optimising ship routes. It can also be used to track water temperature and flow to predict extreme weather events based on historical simulations.

“Exploration and mapping, and making the data open source, would be for the betterment of all citizens,” Ballard told The Smithsonian Magazine1. “Not just in economic terms but in opportunities for unexpected discoveries.”

Big data analytics has the potential to see patterns in vast reams of data, crunching the numbers to provide analysis and insight. Armed with this information, we can better understand our oceans, sustain and protect them. In doing so, we can have a positive effect on the overall health of our planet.

Source: Intel

What is Machine Learning?

Typing “what is machine learning?” into a Google search opens up a pandora’s box of forums, academic research, and hearsay – and the purpose of this article is to simplify the definition and understanding of machine learning thanks to the direct help from our panel of machine learning researchers.

In addition to an informed, working definition of machine learning (ML), we aim to provide a succinct overview of the fundamentals of machine learning, the challenges and limitations of getting machines to ‘think’, some of the issues being tackled today in deep learning (the ‘frontier’ of machine learning), and key takeaways for developing machine learning applications.

This article will be broken up into the following sections:

  • What is machine learning?
  • How we arrived at our definition (IE: the perspective of expert researchers)
  • Machine learning basic concepts
  • Visual representation of ML models
  • How we get machines to learn
  • An overview of the challenges and limitations of ML
  • Brief introduction to deep learning

We put together this resource to help with whatever your area of curiosity about machine learning – so scroll along to your section of interest, or feel free to read the article in order, starting with our machine learning definition below:

What is Machine Learning?

* “Machine Learning is the science of getting computers to learn and act like humans do, and improve their learning over time in autonomous fashion, by feeding them data and information in the form of observations and real-world interactions.”

The above definition encapsulates the ideal objective or ultimate aim of machine learning, as expressed by many researchers in the field. The purpose of this article is to provide a business-minded reader with expert perspective on how machine learning is defined, and how it works. References and related researcher interviews are included at the end of this article for further digging.

* How We Arrived at Our Definition:

(Our aggregate machine learning definition can be found at the beginning of this article)

As with any concept, machine learning may have a slightly different definition, depending on whom you ask. We combed the Internet to find five practical definitions from reputable sources:

  1. “Machine Learning at its most basic is the practice of using algorithms to parse data, learn from it, and then make a determination or prediction about something in the world.” – Nvidia 
  2. “Machine learning is the science of getting computers to act without being explicitly programmed.” – Stanford
  3. “Machine learning is based on algorithms that can learn from data without relying on rules-based programming.”- McKinsey & Co.
  4. “Machine learning algorithms can figure out how to perform important tasks by generalizing from examples.” – University of Washington
  5. “The field of Machine Learning seeks to answer the question “How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?” – Carnegie Mellon University

We sent these definitions to experts whom we’ve interviewed and/or included in one of our past research consensuses, and asked them to respond with their favorite definition or to provide their own. Our introductory definition is meant to reflect the varied responses. Below are some of their responses:

Dr. Yoshua Bengio, Université de Montréal:

ML should not be defined by negatives (thus ruling 2 and 3). Here is my definition:

Machine learning research is part of research on artificial intelligence, seeking to provide knowledge to computers through data, observations and interacting with the world. That acquired knowledge allows computers to correctly generalize to new settings.

Dr. Danko Nikolic, CSC and Max-Planck Institute:

(edit of number 2 above): “Machine learning is the science of getting computers to act without being explicitly programmed, but instead letting them learn a few tricks on their own.”

Dr. Roman Yampolskiy, University of Louisville:

Machine Learning is the science of getting computers to learn as well as humans do or better.

Dr. Emily Fox, University of Washington: 

My favorite definition is #5.

Machine Learning Basic Concepts

There are many different types of machine learning algorithms, with hundreds published each day, and they’re typically grouped by either learning style (i.e. supervised learning, unsupervised learning, semi-supervised learning) or by similarity in form or function (i.e. classification, regression, decision tree, clustering, deep learning, etc.). Regardless of learning style or function, all combinations of machine learning algorithms consist of the following:

  • Representation (a set of classifiers or the language that a computer understands)
  • Evaluation (aka objective/scoring function)
  • Optimization (the search method used to find the highest-scoring classifier; both off-the-shelf and custom optimization methods are used)


Image credit: Dr. Pedro Domingos, University of Washington

The fundamental goal of machine learning algorithms is to generalize beyond the training samples, i.e. to successfully interpret data that the model has never ‘seen’ before.

Visual Representations of Machine Learning Models

Concepts and bullet points can only take one so far in understanding. When people ask “What is machine learning?”, they often want to see what it is and what it does. Below are some visual representations of machine learning models, with accompanying links for further information. Even more resources can be found at the bottom of this article.

How We Get Machines to Learn

There are different approaches to getting machines to learn, from using basic decision trees to clustering to layers of artificial neural networks (the latter of which has given way to deep learning), depending on what task you’re trying to accomplish and the type and amount of data that you have available.

While emphasis is often placed on choosing the best learning algorithm, researchers have found that some of the most interesting questions arise when none of the available machine learning algorithms perform up to par. Most of the time this is a problem with the training data, but it also occurs when working with machine learning in new domains.

Research done while working on real applications often drives progress in the field, for two reasons: first, there is a tendency to discover the boundaries and limitations of existing methods; second, researchers and developers working with domain experts can leverage their time and expertise to improve system performance.

Sometimes this also occurs by “accident.” We might consider model ensembles, or combinations of many learning algorithms to improve accuracy, to be one example. Teams competing for the 2009 Netflix Prize found that they got their best results when combining their learners with other teams’ learners, resulting in an improved recommendation algorithm (read Netflix’s blog for more on why they didn’t end up using this ensemble).

One important point (based on interviews and conversations with experts in the field), in terms of application within business and elsewhere, is that machine learning is not just about automation, an often misunderstood concept. If you think of it that way, you’re bound to miss the valuable insights that machines can provide and the resulting opportunities (rethinking an entire business model, for example, as has happened in industries like manufacturing and agriculture).

Machines that learn are useful to humans because, with all of their processing power, they’re able to more quickly highlight or find patterns in big (or other) data that would have otherwise been missed by human beings. Machine learning is a tool that can be used to enhance humans’ abilities to solve problems and make informed inferences on a wide range of problems, from helping diagnose diseases to coming up with solutions for global climate change.

Challenges and Limitations

“Machine learning can’t get something from nothing…what it does is get more from less.” – Dr. Pedro Domingos, University of Washington

The two biggest historical (and ongoing) problems in machine learning have involved overfitting (in which the model is biased towards the training data and does not generalize to new data, and/or exhibits high variance, i.e. learns random noise in the training data) and dimensionality (algorithms with more features work in higher/multiple dimensions, making the data harder to understand). Having access to a large enough data set has in some cases also been a primary problem.

One of the most common mistakes among machine learning beginners is testing on the training data and having the illusion of success; Domingos (and others) emphasize the importance of keeping some of the data set separate when testing models, only using that reserved data to test a chosen model, followed by learning on the whole data set.
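A minimal sketch of that discipline, using scikit-learn and the standard iris data purely for illustration:

    # Minimal sketch of the advice above: keep a held-out test set, tune with
    # cross-validation on the training portion only, and test the chosen model once.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split, cross_val_score
    from sklearn.linear_model import LogisticRegression

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=0)

    model = LogisticRegression(max_iter=1000)
    cv_scores = cross_val_score(model, X_train, y_train, cv=5)    # model-selection signal
    print("cross-validation accuracy:", cv_scores.mean())

    model.fit(X_train, y_train)                                    # final fit on training data
    print("held-out test accuracy:", model.score(X_test, y_test))  # reported once, at the end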

When a learning algorithm (i.e. learner) is not working, often the quicker path to success is to feed the machine more data, the availability of which is by now well-known as a primary driver of progress in machine and deep learning algorithms in recent years; however, this can lead to issues with scalability, in which we have more data but the time required to learn from it remains an issue.

In terms of purpose, machine learning is not an end or a solution in and of itself. Furthermore, attempting to use it as a blanket solution i.e. “BLANK” is not a useful exercise; instead, coming to the table with a problem or objective is often best driven by a more specific question – “BLANK”.

Deep Learning and Modern Developments in Neural Networks

Deep learning involves the study and design of machine learning algorithms for learning good representations of data at multiple levels of abstraction. Recent publicity of deep learning through DeepMind, Facebook, and other institutions has highlighted it as the “next frontier” of machine learning.

The International Conference on Machine Learning (ICML) is widely regarded as one of the most important in the world. This year’s took place in June in New York City, and it brought together researchers from all over the world who are working on addressing the current challenges in deep learning:

  1. Unsupervised learning in small data sets
  2. Simulation-based learning and transferability to the real world

Deep-learning systems have made great gains over the past decade in domains like object detection and recognition, text-to-speech, information retrieval and others. Research is now focused on developing data-efficient machine learning, i.e. deep learning systems that can learn more efficiently, with the same performance in less time and with less data, in cutting-edge domains like personalized healthcare, robot reinforcement learning, sentiment analysis, and others.

Key Takeaways in Applying Machine Learning

Below is a selection of best practices and concepts of applying machine learning that we’ve collated from our interviews for our podcast series, and from select sources cited at the end of this article. We hope that some of these principles will clarify how ML is used, and how to avoid some of the common pitfalls that companies and researchers might be vulnerable to in starting off on an ML-related project.

  • Arguably the most important factor in successful machine learning projects is the features used to describe the data (which are domain-specific), and having adequate data to train your models in the first place
  • Most of the time when algorithms don’t perform well, it’s due to a problem with the training data (i.e. insufficient amounts/skewed data; noisy data; or insufficient features describing the data for making decisions)
  • “Simplicity does not imply accuracy” – there is (according to Domingos) no necessary connection between the number of parameters of a model and its tendency to overfit
  • Obtaining experimental data (as opposed to observational data, over which we have no control) should be done if possible (for example, data gleaned from sending different variations of an email to a random audience sampling)
  • Whether or not we label data causal or correlative, the more important point is to predict the effects of our actions 
  • Always set aside a portion of your training data set for cross validation; you want your chosen classifier or learning algorithm to perform well on fresh data

Source: Techemergence

The Digital Twin effect: Four ways it can revitalise your business

Enterprises across the globe are embracing digital twins to revitalize their businesses. By 2021, half the world’s large industrial companies will rely on this innovative technology to gain additional insight around their products, assets, processes, operations, and more.

Here are four specific ways digital twins can benefit your enterprise:

Enable data-driven decision making

Creating a digital twin involves building a comprehensive digital representation of the many components of a physical object, from outer features to the software inside. Companies develop digital twins by attaching Internet of Things (IoT) sensors to their products, assets, or equipment.

Building digital twins will give you digitalized versions of bills of materials, 2D drawings, and 3D models. More importantly, you’ll have an accurate view of how your devices are operating in real time.

This data empowers you to make better decisions. If your manufacturing equipment is lagging, you can fix or upgrade the machinery before it impacts your company’s efficiency. If a product is underperforming, you can make improvements so future releases don’t have similar issues.

Automate business processes

On top of providing greater connectivity between your company and its products, digital twins help your enterprise better connect with its business processes.

Real-time data makes it possible for you to spot and put an end to business-process inefficiencies. But combining real-time data with historical data and machine learning capabilities in a digital twin allows you to predict problems and automatically resolve them.

To the naked eye, an asset may be operating as expected. Inside the machine, however, it’s another story. A glitch in the system is causing your asset to gradually slow down. Five days from now, it’ll fail completely.

Without the right technology, you’d never know that. But digital twins help you anticipate issues and prevent problems before they even occur. They enable you to detect anomalies and automate repair processes at the first sign of weakness. And by coming to your asset’s rescue sooner rather than later, you can avoid serious service interruption or prolonged downtime.
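As a toy illustration of that predictive step (invented sensor data and threshold, not any vendor's digital twin platform), one can fit a simple degradation trend to a twin's recent readings and estimate when it will cross a failure limit:

    # Illustrative sketch of predictive maintenance on digital twin sensor data.
    # Data and threshold are invented for illustration.
    import numpy as np

    hours = np.arange(0, 120)                       # last five days of sensor history
    vibration = 2.0 + 0.01 * hours + 0.05 * np.random.default_rng(4).standard_normal(120)

    slope, intercept = np.polyfit(hours, vibration, 1)   # simple degradation trend
    FAILURE_THRESHOLD = 3.5                              # mm/s, hypothetical limit

    hours_to_failure = (FAILURE_THRESHOLD - (intercept + slope * hours[-1])) / slope
    print(f"estimated time to failure: {hours_to_failure:.0f} hours")
    # A real twin would trigger a work order automatically well before this point.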

Increase collaboration

IoT keeps data flowing. Digital twins allow you to access this wealth of data in real time. But you don’t have to keep all that data to yourself. In fact, you'd be wise to share it.

Creating a digital twin network makes it easy to share data with internal colleagues, external supply chain partners, and even customers. With access to the same insight, you, your partners, and your customers can collaboratively improve products, processes, and more.

Sharing digital twin data with multiple internal departments ensures everyone’s always on the same page. Your R&D, finance, marketing, and sales teams – groups that typically work in silos – can collaborate to ensure your new product is properly designed, accurately priced, sufficiently promoted, and commercially viable.

Supply chain partners benefit from a network of digital twins with enhanced visibility. If an asset malfunctions, your maintenance provider knows it needs to mobilize a team to fix the equipment. If your company manufactures a product ahead of schedule, your logistics provider knows it can pick up the goods and deliver them early.

Finally, digital twin networks help you glean invaluable insight from your customers. By monitoring how customers interact with your goods, you can remove underused features from future product iterations or develop new products that highlight popular features.

Enabling an open, collaborative environment through a network of digital twins offers you the chance to transform engineering, operations, and everything else in between.

Create new business models

No enterprise is immune to industry-altering disruption. That’s why companies must constantly look for new ways to re-imagine existing business models and generate revenue.

Digital twins present an opportunity to do both.

Say you manufacture compressed air supply systems. In addition to selling your equipment and installing it at your customer’s site, you offer to maintain it throughout the asset life cycle and charge fees based on air consumption rather than a fixed rate.

With a digital twin network you share with your customer, you can monitor the condition of your asset around the clock and accurately track how much air your customer consumes. This reliable and transparent method ensures you’re always standing by to repair the asset, if necessary, and charging the proper amount of money each billing cycle.

Thinking outside the box and exploring innovative as-a-service business models is a surefire way to remain profitable in today’s ever-evolving digital world.

Digital Twin in Action

Here are two shining examples of companies winning with this exciting, new technology:

Stara

This Brazil-based tractor manufacturer uses digital twins to modernize farming.

By outfitting its tractors with IoT sensors, the company can increase equipment performance. With real-time visibility into how its tractors operate, Stara can proactively prevent equipment malfunctions and improve asset uptime.

The company has also leveraged digital twins to create new business models. Stara launched a profitable new service that provides farmers with real-time insight detailing the optimal conditions for planting crops and improving farm yield.

Farmers have reduced seed use by 21% and fertilizer use by 19% thanks to Stara’s guidance.

Kaeser

This manufacturer of compressed air products used digital twins to go from merely selling a product to selling a service.

Instead of installing equipment at a customer’s site and leaving operation to the customer, Kaeser maintains the asset throughout its lifecycle and charges fees based on air consumption rather than a fixed rate.

A digital twin network enables the company to monitor the condition of its equipment around the clock and measure customer air consumption. Real-time asset data helps Kaeser ensure equipment uptime and charge an accurate amount of money each billing cycle.

To date, the company has cut commodity costs by 30% and onboarded 50% of major vendors using digital twins.

 Replicating your business for the better

Digital twins give you the ability to enable data-driven decision making, automate business processes, increase collaboration, and create new business models. They help you improve partner collaboration so you can meet evolving customer demands efficiently and cost-effectively.

Source: Forbes

Will AI help cybersecurity or the hackers?

Like in any battle, the ability to harness new technologies can be a decisive factor in victory. In cybersecurity, that new technology is artificial intelligence, and it will benefit both sides.

 

Source: Mashable

Will AI bring a new renaissance?

Artificial intelligence is becoming the fastest disruptor and generator of wealth in history. It will have a major impact on everything. Over the next decade, more than half of the jobs today will disappear and be replaced by AI and the next generation of robotics.

AI has the potential to cure diseases, enable smarter cities, tackle many of our environmental challenges, and potentially redefine poverty. There are still many questions to ask about AI and what can go wrong. Elon Musk recently suggested that under some scenarios AI could jeopardise human survival. 

AI's ability to analyse data, and the accuracy with which it does so, is enormous. This will enable the development of smarter machines for business.

But at what cost, and how will we control it? Society needs to seriously rethink AI's potential and its impact on both our society and the way we live.

Artificial intelligence and robotics were initially thought to be a danger to blue-collar jobs, but that is changing: white-collar workers – such as lawyers and doctors – who carry out purely quantitative analytical processes are also becoming an endangered species. Some of their methods and procedures are increasingly being replicated and replaced by software.

For instance, researchers at MIT's Computer Science and Artificial Intelligence Laboratory, Massachusetts General Hospital and Harvard Medical School developed a machine learning model to better detect cancer.

They trained the model on 600 existing high-risk lesions, incorporating parameters like family history, demographics, and past biopsies. It was then tested on 335 lesions, and they found it could predict the status of a lesion with 97 per cent accuracy, ultimately enabling the researchers to upgrade those lesions to cancer.

Traditional mammograms uncover suspicious lesions, whose status is then tested with a needle biopsy. Abnormalities would undergo surgery, with around 90 per cent usually turning out to be benign, rendering the procedures unnecessary. As the amount of data and the number of potential variables grow, human clinicians cannot compete at the same level as AI.
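To make the setup concrete, here is a toy sketch of that kind of model, with entirely synthetic lesion records and a generic scikit-learn classifier; it is not the MIT/MGH/Harvard model and will not reproduce its 97 per cent figure.

    # Toy sketch of the kind of model described (not the MIT/MGH/Harvard model):
    # a classifier over lesion records with demographic and biopsy-history features.
    # All data below is synthetic.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(5)
    n = 935                                   # ~600 training + 335 test, as in the article

    X = np.column_stack([
        rng.integers(0, 2, n),                # family history of breast cancer (0/1)
        rng.normal(55, 10, n),                # age
        rng.integers(0, 4, n),                # number of past biopsies
        rng.normal(0, 1, n),                  # placeholder imaging feature
    ])
    y = rng.integers(0, 2, n)                 # 1 = lesion later upgraded to cancer

    X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=600, random_state=0)
    model = GradientBoostingClassifier().fit(X_train, y_train)
    print("held-out accuracy on synthetic data:", model.score(X_test, y_test))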

So will AI take the clinician's job, or will it just provide a better diagnostic tool, freeing up clinicians to build a better connection with their patients?

Confusion around the various terminologies relating to AI can warp the conversation. Artificial general intelligence (AGI) is where machines can successfully perform any intellectual task that a human can do - sometimes referred to as “strong AI”, or “full AI”. That is where a machine can perform “general intelligent actions”.

Max Tegmark, in his recent book Life 3.0, describes AI as a machine or computer that displays intelligence. This contrasts with the natural intelligence that you and I and other animals display. AI research is the study of intelligent agents: devices that sense their environment and take actions to maximise their chances of success.

Tegmark refers to Life 3.0 as a representation of our current stage of evolution. Life 1.0 refers to our biological origins, or our hardware, which has been controlled by the process of evolution.

Life 2.0 is the cultural development of humanity. This refers to our software, which drives us and our minds. Education and knowledge have been a major influence on this stage of our journey, constantly being updated and upgraded. These versions of Life are based on survival of the fittest, our education and time.

Life 3.0 is the technological age of humanity. We have effectively reached the point where we can upgrade our own hardware and software. Not yet to the levels seen in the movies – that may be possible in the future, but it is a while away. The upgrades we have made so far come from our use of technology, advanced materials and drugs that improve our bodies.

The first renaissance

This was a period between the 14th and 17th centuries. The Renaissance encompassed an innovative flowering of Latin and vernacular literatures, beginning with a resurgence of learning based on classical sources. Various theories have been proposed to account for its origins and character, focusing on factors including the social and civic peculiarities of the time.

Renaissance literally means ‘rebirth’; it was a cultural movement that profoundly affected European intellectual life. The period was a time of exploration and of many changes in society, when people were able to ask and explore new questions.

A ‘Renaissance man’ was a person skilled in multiple disciplines, someone with a broad base of knowledge who pursued several fields of study. A good example from this period was Leonardo da Vinci, a master of art, engineering and anatomy, among many other disciplines, with remarkable success. The Renaissance man shows skill in many matters.

Einstein was a genius of theoretical physics, but he was not necessarily a Renaissance man. In the past, university students were encouraged to study the liberal arts, the idea being to give them a more rounded education.

Not many of these students became polymaths, but the belief was that a broad-based education would lead to a more developed mind. As Daniel Pink argues in A Whole New Mind, the Master of Fine Arts may become the MBA of the future.

The new renaissance

AI is going to free us from many arduous duties in our work. Businesses that embrace these changes will grow; others will go. Robotics and AI are starting to have major social and cultural impacts.

We are seeing more protests against technology, with people becoming activists. The growing inequality between pay and work is affecting many people. Taxi drivers are affected by Uber, hotels by Airbnb, and many more besides; the rules have changed, and many are not happy. The situation draws a close parallel to the cottage industries of the industrial age, whose disruption brought the rise of the Luddites, who rallied around the mythical figure of Ned Ludd.

The plight of disenfranchised workers faced with innovation, industrial-scale change and the destruction of their trade rings as true today as it did for the Luddites of the industrial age.

Recently, the term neo-Luddism has emerged to describe opposition to many forms of technology. According to a manifesto drawn up by the Second Luddite Congress in 1996, neo-Luddism is “a leaderless movement of passive resistance to consumerism and the increasingly bizarre and frightening technologies of the Computer Age.” (Wikipedia)

We need to take this time as an opportunity to create a new Renaissance period, enabling more of us to become ‘Renaissance people’, using our creativity and innovative traits. Innovation is what businesses want but what computers struggle to master.

Jobs of the future will come from this aspect of humanity, but if we are not paying attention and ignore the situation, the neo-Luddites may have a point, potentially creating a situation comparable to when the Luddites started to break the industrial looms.

Machine-breaking was criminalised as early as 1721, and the Frame Breaking Act of 1812 made it punishable by death. That is not to say we will get that far again, but some are already building their camps and weaponising themselves for just that eventuality.

So, what can we do?

We need to talk about AI and the future. We need to realise that the impacts are going to be immense and that we need to plan. Jobs are changing and will continue to change, so you need to prepare. Innovation is a top priority for many organisations.

It can no longer be left to the realm of the geeks and techies. We all need to be more innovative and creative; these capabilities must grow exponentially and become core competencies. Innovation is a matter of a change in mindset and of developing the right environment and circumstances.

We need to ask more questions in order to find the right answers. This is an important skill that many have forgotten or lost. We can find many answers on Google but, without the right question, they are worthless.

We need to explore the process of doing just that, asking the right question to achieve the right outcomes.

Get ready for AI and the future because the future is NOW!

Source: Cio

AI in banking

Artificial intelligence is a new approach to information discovery and decision-making. Inspired by the way the human brain processes information, draws conclusions, and codifies instincts and experiences into learning, it is able to bridge the gap between the intent of big data and the reality of practical decision-making. Artificial Intelligence (AI), machine learning systems, and natural language processing are now no longer experimental concepts but potential business disrupters that can drive insights to aid real-time decision making. Each week there are new advancements, new technologies, new applications, and new opportunities in AI. It’s inspiring, but also overwhelming. That’s why I created this guide to help you keep pace with all these exciting developments. Whether you’re currently employed in the banking industry, working with Produvia or just pursuing an interest in the subject, there will always be something here to inspire you.

Today, banks and financial servicing companies must embrace artificial intelligence technologies in order to improve business engagement, automation, insights and strategies.

AI Ideas for Banking

There are many opportunities for artificial intelligence in the banking industry. Here are a few AI ideas to consider:

  1. Intelligent Mortgage Loan Approvals
    Imagine technology that pulls third-party data to verify an applicant’s identity, determines whether the bank can offer pre-approval on the basis of a partial application, estimates property value, creates document files for title validation and flood certificate searches, determines loan terms on the basis of risk scoring, develops a strategy to improve conversion, and provides real-time text and voice support via chatbot. (BCG, 2017) Imagine a system that approves mortgage loans by comparing the applicant’s finances with data for existing loan holders. Imagine software that calculates mortgage risk based on a wide range of loan-level characteristics at origination (credit score, loan-to-value ratio, product type and features), a number of variables describing loan performance (e.g., number of times delinquent in past year), and several time-varying factors that describe the economic conditions a borrower faces, including both local variables such as housing prices, average incomes, and foreclosure rates at the zip code level, as well as national-level variables such as mortgage rates. (Justin Sirignano, 2016)
  2. Risk Management
    Imagine software that gains intelligence from various data sources such as credit scores, financial data, spending patterns. (FinExtra, 2017) Imagine technology that identifies a risk score of a customer based on his or her nationality, occupation, salary range, experience, industry he or she works for, and credit history. (Quora, 2017)
  3. Fraud Detection
    Imagine technology that establishes patterns based on the historical behaviour of account owners. When uncharacteristic transactions occur, an alert is generated indicating the possibility of fraud. (FinExtra, 2017) Imagine software that can detect fraudulent patterns by analyzing historical transaction data. (Feedzai, Nymi, Zoloz, BioCatch)
    Imagine a system that detects suspicious transactions, voice recognition software that confirms the identity of a bank customer whose credit card information has been stolen, and cognitive-automation technology that recommends an action — perhaps via a chatbot — to that customer. (BCG, 2017) Imagine software that detects financial fraud using anomaly detection; a minimal sketch of this idea appears after this list.
  4. Credit Risk Management
    Imagine software that allows for more accurate, instant credit decisions by analyzing news and business networks. This system can also be used to improve Early Warning Systems (EWS) and to provide mitigation recommendations. (Accenture, 2017)
  5. Risk and Finance Reporting
    Imagine Robotic Process Automation (RPA) which allows a business to map out simple, rule-based processes and have a computer carry them out on their behalf. Imagine a program that reads and understands unstructured data or text and makes subjective decisions in response, similar to a human. This system enables banks to meet regulatory reporting requirements at speed, whilst reducing costs. (Accenture, 2017)
  6. Customer Service Chatbot
    Imagine a banking chatbot that understands customer behaviour, tracks spending patterns and tailors recommendations on how to manage finances. Imagine a chatbot that helps customers perform routine banking transactions while offering simple insights on improving finance management. Imagine a bot that curates targeted offers and promotes relevant products and services, thereby increasing customer satisfaction. (FinExtra, 2017)
  7. Customer Engagement
    Imagine technology that improves customer understanding and activation through personalization, influencing desired actions. (Deloitte, 2017)
  8. Banking Automation
    Imagine software that automates repetitive, knowledge & natural language rich, human intensive decision processes. (Deloitte, 2017)
  9. Banking Insights
    Imagine technology that determines key patterns and relationships from billions of data sources in real-time to derive deep and actionable insights. (Deloitte, 2017)
  10. Shape Strategies
    Imagine software that builds a deep understanding of company, market dynamics, and disruptive trends to shape strategies. (Deloitte, 2017)
  11. Predict Cash at ATMs
    Imagine an algorithm that predicts the cash required at each of its ATMs across the country, combining this with route-optimization techniques to save money. (McKinsey, 2017)
  12. Detect Anti-Money Laundering (AML) Activity
    Imagine technology that detects anti-money laundering (AML) activity by tracing the true source of money and identifying disguised illegal cash flow. (FinExtra, 2017)
  13. Know-Your-Customer Checks
    Imagine technology that provides continuous monitoring of transactions and can better identify whether a particular transaction is worthy of follow-up investigation, given the system's analysis of historical transaction patterns and behaviors. (Medium, 2017)
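
To illustrate the anomaly-detection idea referenced in the fraud-detection item above, here is a minimal sketch using an isolation forest over hypothetical transaction features. The feature set, contamination rate and library choice are assumptions for illustration, not any vendor's actual method.

```python
# A minimal sketch of transaction anomaly detection on made-up features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical historical transactions: amount (USD) and hour of day
normal = np.column_stack([rng.normal(60, 20, 1000), rng.normal(14, 3, 1000)])
model = IsolationForest(contamination=0.01, random_state=42).fit(normal)

# Score new transactions: -1 flags a likely anomaly, 1 looks typical
new_tx = np.array([[55.0, 13.0], [4200.0, 3.0]])
print(model.predict(new_tx))  # e.g. [ 1 -1]
```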

Practical AI In Banking

There are many banks that are now incorporating artificial intelligence technologies. Here are a few of our favourites:

  1. In Europe, more than a dozen banks have replaced older statistical-modeling approaches with machine-learning techniques and, in some cases, experienced 10 percent increases in sales of new products, 20 percent savings in capital expenditures, 20 percent increases in cash collections, and 20 percent declines in churn. The banks have achieved these gains by devising new recommendation engines for clients in retailing and in small and medium-sized companies. They have also built microtargeted models that more accurately forecast who will cancel service or default on their loans, and how best to intervene. (McKinsey, 2015)
  2. In Canada, a major Canadian Bank reduced watch list checks from 12 hours to less than 15 minutes, increased name checks from 2,500 to more than 40,000, reduced false positives by 75%, and realized ROI in 3 months. (IBM, 2017)
  3. A South American Bank improved efficiency by 60% by reducing administrative costs. They also reduced AML alerts by 90% which in turn increased accuracy by 60%. (IBM, 2017)

Back to the core of intelligence

Two decades ago I (José Hernández-Orallo) started working on metrics of machine intelligence. By that time, during the glacial days of the second AI winter, few were really interested in measuring something that AI lacked completely. And very few, such as David L. Dowe and I, were interested in metrics of intelligence linked to algorithmic information theory, where the models of interaction between an agent and the world were sequences of bits, and intelligence was formulated using Solomonoff’s and Wallace’s theories of inductive inference.

In the meantime, seemingly dozens of variants of the Turing test were proposed every year, the CAPTCHAs were introduced and David showed how easy it is to solve some IQ tests using a very simple program based on a big-switch approach. And, today, a new AI spring has arrived, triggered by a blossoming machine learning field, bringing a more experimental approach to AI with an increasing number of AI benchmarks and competitions (see a previous entry in this blog for a survey).

Considering this 20-year perspective, last year was special in many ways. The first in a series of workshops on evaluating general-purpose AI took off, echoing the increasing interest in the assessment of artificial general intelligence (AGI) systems, capable of finding diverse solutions for a range of tasks. Evaluating these systems is different, and more challenging, than the traditional task-oriented evaluation of specific systems, such as a robotic cleaner, a credit scoring model, a machine translator or a self-driving car. The idea of evaluating general-purpose AI systems using videogames had caught on. The arcade learning environment (the Atari 2600 games) or the more flexible Video Game Definition Language and associated competition became increasingly popular for the evaluation of AGI and its recent breakthroughs.

Last year also witnessed the introduction of a different kind of AI evaluation platform, such as Microsoft’s Malmö, GoodAI’s School, OpenAI’s Gym and Universe, DeepMind’s Lab, Facebook’s TorchCraft and CommAI-env. Based on a reinforcement learning (RL) setting, these platforms make it possible to create many different tasks and connect RL agents through a standard interface. Many of these platforms are well suited for the new paradigms in AI, such as deep reinforcement learning and some open-source machine learning libraries. After thousands of episodes or millions of steps against a new task, these systems are able to excel, usually with better-than-human performance.
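
All of these platforms expose some variant of the same agent-environment loop. As a rough illustration, the sketch below uses the classic (pre-0.26) OpenAI Gym API with a random policy; the environment name and the policy are placeholders, not anything specific to the platforms named above.

```python
# A rough sketch of the standard RL interaction loop these platforms expose,
# using the classic (pre-0.26) OpenAI Gym API.
import gym

env = gym.make("CartPole-v1")
for episode in range(3):
    obs = env.reset()
    done, total_reward = False, 0.0
    while not done:
        action = env.action_space.sample()          # a real agent would choose here
        obs, reward, done, info = env.step(action)  # observation, reward, episode end
        total_reward += reward
    print(f"episode {episode}: return {total_reward}")
env.close()
```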

Despite the myriads of applications and breakthroughs that have been derived from this paradigm, there seems to be a consensus in the field that the main open problem lies in how an AI agent can reuse the representations and skills from one task to new ones, making it possible to learn a new task much faster, with a few examples, as humans do. This can be seen as a mapping problem (usually under the term transfer learning) or can be seen as a sequential problem (usually under the terms gradual, cumulative, incremental, continual or curriculum learning).

One of the key notions that is associated with this capability of a system of building new concepts and skills over previous ones is usually referred to as “compositionality”, which is well documented in humans from early childhood. Systems are able to combine the representations, concepts or skills that have been learned previously in order to solve a new problem. For instance, an agent can combine the ability of climbing up a ladder with its use as a possible way out of a room, or an agent can learn multiplication after learning addition.

In my opinion, two of the previous platforms are better suited for compositionality: Malmö and CommAI-env. Malmö has all the ingredients of a 3D game, and AI researchers can experiment and evaluate agents with vision and 3D navigation, which is what many research papers using Malmö have done so far, as this is a hot topic in AI at the moment. However, to me, the most interesting feature of Malmö is building and crafting, where agents must necessarily combine previous concepts and skills in order to create more complex things.

CommAI-env is clearly an outlier in this set of platforms. It is not a video game in 2D or 3D. Video or audio don’t have any role there. Interaction is just produced through a stream of input/output bits and rewards, which are just +1, 0 or -1. Basically, actions and observations are binary. The rationale behind CommAI-env is to give prominence to communication skills, but it still allows for rich interaction, patterns and tasks, while “keeping all further complexities to a minimum”.

When I became aware that the General AI Challenge was using CommAI-env for its warm-up round, I was ecstatic. Participants could focus on RL agents without the complexities of vision and navigation. Of course, vision and navigation are very important for AI applications, but they create many extra complications if we want to understand (and evaluate) gradual learning. For instance, two equal tasks for which the texture of the walls changes can be seen as requiring higher transfer effort than two slightly different tasks with the same texture. In other words, these would be extra confounding factors that would make the analysis of task transfer and task dependencies much harder. It is then a wise choice to exclude this from the warm-up round. There will be occasions during other rounds of the challenge for including vision, navigation and other sorts of complex embodiment. Starting with a minimal interface to evaluate whether the agents are able to learn incrementally is not only challenging but also an important open problem for general AI.

Also, the warm-up round has modified CommAI-env in such a way that bits are packed into 8-bit (1 byte) characters. This makes the definition of tasks more intuitive and makes the ASCII coding transparent to the agents. Basically, the set of actions and observations is extended to 256. But interestingly, the set of observations and actions is the same, which allows many possibilities that are unusual in reinforcement learning, where these subsets are different. For instance, an agent with primitives such as “copy input to output” and other sequence transformation operators can compose them in order to solve the task. Variables, and other kinds of abstractions, play a key role.
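
As a toy illustration of that byte-level loop, the sketch below shows an agent built from a single "copy input to output" primitive. The class and method names are hypothetical stand-ins, not CommAI-env's actual interface.

```python
# A toy illustration of byte-level interaction. The interface here is a
# hypothetical stand-in, not CommAI-env's real API.
class EchoAgent:
    """Agent built from one primitive: copy the last input byte to the output."""

    def __init__(self):
        self.last_observation = 0  # one byte (0-255)

    def observe(self, byte_in: int, reward: int) -> None:
        # reward is +1, 0 or -1, as in the environment described above
        self.last_observation = byte_in

    def act(self) -> int:
        return self.last_observation  # "copy input to output"


# Hypothetical usage against a task that rewards echoing the prompt back
agent = EchoAgent()
for byte_in in b"repeat after me":
    agent.observe(byte_in, reward=0)
    print(chr(agent.act()), end="")
print()
```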

 

This might give the impression that we are back to Turing machines and symbolic AI. In a way, this is the case, and much in alignment to Turing’s vision in his 1950 paper: “it is possible to teach a machine by punishments and rewards to obey orders given in some language, e.g., a symbolic language”. But in 2017 we have a range of techniques that weren’t available just a few years ago. For instance, Neural Turing Machines and other neural networks with symbolic memory can be very well suited for this problem.

By no means does this indicate that the legion of deep reinforcement learning enthusiasts cannot bring their apparatus to this warm-up round. Indeed they won’t be disappointed by this challenge if they really work hard to adapt deep learning to this problem. They won’t probably need a convolutional network tuned for visual pattern recognition, but there are many possibilities and challenges in how to make deep learning work in a setting like this, especially because the fewer examples, the better, and deep learning usually requires many examples.

As a plus, the simple, symbolic sequential interface opens the challenge to many other areas in AI, not only recurrent neural networks but techniques from natural language processing, evolutionary computation, compression-inspired algorithms or even areas such as inductive programming, with powerful string-handling primitives and its appropriateness for problems with very few examples.

I think that all of the above makes this warm-up round a unique competition. Of course, since we haven’t had anything similar in the past, we might have some surprises. It might happen that an unexpected (or even naïve) technique could behave much better than others (and humans) or perhaps we find that no technique is able to do something meaningful at this time.

I’m eager to see how this round develops and what the participants are able to integrate and invent in order to solve the sequence of micro and mini-tasks. I’m sure that we will learn a lot from this. I hope that machines will, too. And all of us will move forward to the next round!

Source: Medium

History of Chatbots

Turing Test

Are you familiar with the Turing Test? For the uninitiated, the Turing Test was developed by Alan Turing, the original computer nerd, in 1950. The idea is simple: for a machine to pass the Turing Test, it must exhibit intelligent behavior indistinguishable from that of a human being.

The test is usually conceptualized with one person—the interrogator—speaking through a computerized interface with two different entities, hidden from view. One is an actual computer, one is a human being. If the interrogator is unable to determine which is which, the computer has passed the Turing Test.

Despite experts working on this problem for nearly seventy years, machines able to even approach success at the Turing Test have been rare. However, not being able to strictly pass the Turing Test doesn’t mean these systems—what we call chatbots today—are useless. They can handle simple tasks like taking food orders, answering basic customer support questions and offering suggestions based on a request (like Siri and Alexa). They serve an important and growing role in our society, and it’s worth looking at how they’ve developed to this point.

 

ELIZA

The first true chatbot was called ELIZA, developed in the mid-1960s by Joseph Weizenbaum at MIT. On a basic level, its design allowed it to converse through pattern matching and substitution. In the same way someone can listen to you, then offer a response that involves an idea you didn’t specifically mention (“Where should we eat?” “I like that Thai place on the corner.”), ELIZA was programmed to understand patterns of human communication and offer responses that included the same type of substitutions. This gave the illusion that ELIZA understood the conversation.

The most famous version of ELIZA used the DOCTOR script. This allowed it to simulate a Rogerian psychotherapist, and even today it gives responses oddly similar to what we might find in a therapy session—it responds to inputs by trying to draw more information out of the speaker, rather than offer concrete answers. By modern standards, we can tell the conversation goes off the rails quickly, but its ability to maintain a conversation for as long as it does is impressive when we remember it was programmed using punch cards.
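
As a rough illustration of pattern matching and substitution, here is a minimal ELIZA-style responder. The rules are invented for this example and are far simpler than Weizenbaum's DOCTOR script.

```python
# A toy ELIZA-style responder: match a pattern, reflect pronouns, fill a template.
import re

RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)",   "How long have you been {0}?"),
    (r"(.*) mother(.*)", "Tell me more about your family."),
]
SWAPS = {"my": "your", "me": "you", "i": "you"}


def respond(text: str) -> str:
    text = text.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            # Reflect pronouns so "my job" becomes "your job" in the reply
            groups = [" ".join(SWAPS.get(w, w) for w in g.split()) for g in match.groups()]
            return template.format(*groups)
    return "Please, go on."  # default prompt that keeps the conversation going


print(respond("I need a holiday from my job"))  # Why do you need a holiday from your job?
print(respond("I am feeling tired"))            # How long have you been feeling tired?
```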

 

PARRY

The next noteworthy chatbot came relatively soon afterward, in 1972. Sometimes referred to as “ELIZA with attitude”, PARRY simulated the thinking of a paranoid person or paranoid schizophrenic. It was designed by a psychiatrist, Kenneth Colby, who had become disenchanted with psychoanalysis due to its inability to generate enough reliable data to advance the science.

Colby believed computer models of the mind offered a more scientific approach to the study of mental illness and cognitive processes overall. After joining the Stanford Artificial Intelligence Laboratory, he used his experience in the psychiatric field to program PARRY, a chatbot that mimicked a paranoid individual—it consistently misinterpreted what people said, assumed they had nefarious motives and were always lying, and could not be allowed to inquire into certain aspects of PARRY’s “life”. While ELIZA was never expected to mimic human intelligence—although it did occasionally fool people—PARRY was a much more serious attempt at creating an artificial intelligence, and in the early 1970s it became the first machine to pass a version of the Turing Test.

 

Dr. Sbaitso and A.L.I.C.E.

The 1990s saw the advent of two more important chatbots. First was a chatbot designed to actually speak to you: Dr. Sbaitso. Although similar to previous chatbots, with improved pattern recognition and substitution programming, Dr. Sbaitso became known for its weird digitized voice that sounded not at all human, yet did a remarkable job of speaking with correct inflection and grammar. Later, in 1995, A.L.I.C.E. came along, inspired by ELIZA. Its heuristic matching patterns proved a substantial upgrade on previous chatbots; although it never passed a true Turing Test, upgrades to A.L.I.C.E.’s algorithm made it a Loebner Prize winner in 2000, 2001, and 2004.

 

Speaking of the Loebner Prize

Since the invention of ELIZA and PARRY, chatbot technology has continued to improve; however, the most notable contribution of the last thirty years has arguably come in the form of the Loebner Prize. Instituted in 1991, the annual competition awards prizes to the most human-like computer programs, continuing to the present day. Initially the competition required judges to have highly restricted conversations with the chatbots, which led to a great deal of critique; for example, the rules initially required judges to limit themselves to “whimsical conversation”, which played directly into the odd responses often generated by chatbots. Time limits also worked against truly testing the bots, as only so many questions could be asked in five minutes or fewer given the less-than-instant response speeds inherent in computers of that era. One critic, Marvin Minsky, even offered a prize in 1995 to anyone who could stop the competition.

However, the restrictions of the early years were soon lifted, and from the mid-1990s on there have been no limitations placed on what the judges discuss with the bots. Chatbot technology improves every year in part thanks to the Loebner Prize, as programmers chase a pair of one-time awards that have yet to be won. The first is $25,000 for the first program that judges cannot distinguish from a human to the extent that it convinces judges the human is the computer. The other is $100,000 for the first program to pass a stricter Turing Test, where it can decipher and understand not just text, but auditory and visual input as well. Pushing AI development to be capable of this was part of Loebner’s goal in starting the competition; as such, once the $100,000 prize is claimed, the competition will end.

Siri and Alexa

Of course, as important as these goals are, chatbots have been developed with other goals in mind. Siri and Alexa, for example, are artificial intelligences and make no attempt to fool us otherwise; Apple and Amazon, respectively, improve them by enhancing their ability to find relevant answers to our questions. In addition, many of us are familiar with Watson, the computer that competed on Jeopardy! It works not by attempting to be human, but by processing natural language and using that “understanding” to find more and more information online. The process proved very successful—in 2011, Watson beat a pair of former Jeopardy! champions.

We should also note that not all chatbot experiments are successful. The most recent failure, and certainly the most high-profile, was Tay, Microsoft’s Twitter-based chatbot. The intent was for Tay to interact with Twitter users and learn how to communicate with them. Unfortunately, in less than a day, Tay’s primary lesson was how to be incredibly racist, and Microsoft shut down the account.

Even in that negative instance, however, the technology showed it was definitely capable of learning. In the case of Tay, and anyone else seeking to create something similar, the next task is to work on how to filter bad lessons, or tightly control its learning sources. More broadly speaking, all of these examples show how chatbots have evolved, continue to evolve, and are certainly something we should expect to see more and more in the coming years and decades.

Source: Chatbotpack

The AI skills crisis & how to close the gap

Now that nearly every company is considering how artificial intelligence (AI) applications can positively impact their businesses, they are on the hunt for professionals to help them make their vision a reality. According to research done by Glassdoor, data scientists have the No. 1 job in the United States. The survey looked at salary, job satisfaction and the number of job openings. If you have recent experience looking for AI specialists to join your team, it’s quite clear that we’re facing an AI skills crisis. In order to move AI projects from ideation into implementation, companies will need to determine how to close the AI skills gap so they have experts on their team to get the job done.

Factors that contribute to the AI talent shortage

One report suggested there are about 300,000 AI professionals worldwide, but millions of roles available. While these are speculative figures, the competitive salaries and benefits packages and the aggressive recruiting tactics rolled out by firms to recruit AI talent suggest the supply of AI talent is nowhere near matching the demand.

As the democratization of AI and deep learning applications expands—possible not just for tech giants but now viable for small- and medium-sized businesses—the demand for AI professionals to do the work has ballooned as well. The C-suite and corporate management’s excitement about AI’s various applications is building, and once they have bought into the concept (which is happening much more rapidly), they want to make it real right away.

The 2018 “How Companies Are Putting AI to Work Through Deep Learning” survey from O’Reilly reveals the AI skills gap is the largest barrier to AI adoption, although data challenges, company culture, hardware and other company resources are also impediments. These results parallel a recent Ernst & Young poll that confirmed 56% of senior AI professionals believed the lack of qualified AI professionals was the single biggest barrier to AI implementation across business operations.

Another reason for the AI skills crisis is that our academic and training programs just can’t keep up with the pace of innovation and new discoveries with AI. Not only do AI professionals need official training, they need on-the-job experience. Therefore, there aren’t enough experienced AI professionals to step into the leadership roles required by organizations who are just beginning to adopt AI strategies into their operations.

Source: Forbes

Blockchain technology: “We aspire to make the EU the leading player”

Blockchain technology is increasingly being used for anything from crypto currencies to casting votes. Parliament is working on a public policy to stimulate its development.

Blockchain technology is based on digital ledgers, public records that can be used and shared simultaneously. The technology is probably best known as being the basis for Bitcoin and other crypto currencies, but it is also used in many other sectors, ranging from creative industries to public services.
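
As a conceptual sketch of the shared-ledger idea (not a production blockchain, and not tied to any EU project), the toy example below chains records together by storing each block's hash of its predecessor, which is what makes past entries tamper-evident.

```python
# A toy, highly simplified ledger: each block stores the hash of the previous
# block, so altering an old record breaks the chain. Conceptual sketch only.
import hashlib
import json
import time


def make_block(data: dict, prev_hash: str) -> dict:
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block_bytes = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(block_bytes).hexdigest()
    return block


# Build a tiny chain of two records
genesis = make_block({"event": "ledger created"}, prev_hash="0" * 64)
payment = make_block({"from": "alice", "to": "bob", "amount": 10}, prev_hash=genesis["hash"])

# The second block's link only holds if the first block is unchanged
assert payment["prev_hash"] == genesis["hash"]
```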

 

MEPs now want to help create a public policy that supports the development of blockchain and other related technologies.

 

"Disruptive element"

 

Greek S&D member Eva Kaili has written a resolution, which was adopted by a European Parliament committee on 16 May. In it she calls for “open-minded, progressive and innovation-friendly regulation”.

However, the MEP warned that the technology could lead to significant changes. “Blockchain and distributed ledger technologies in general have a strong disruptive element that will affect many sectors," she said. "Financial services is just one." The resolution also looked at how these technologies could reduce the number of intermediaries in other sectors such as energy, health care, education and the creative industries, as well as the public sector.

Kaili is also the chair of the Science and Technology Options Assessment panel, which provides MEPs with independent, high-quality and scientifically impartial studies and information to help assess the impact of new technologies.

Making the EU the leading player

The EU has an important role to play in cultivating this technology, said Kaili. “We aspire to make the EU the leading player in the field of blockchain," she said. "We experience a strong entrepreneurial interest in blockchain. We, as regulators, need to make sure that all this effort will be embraced by the necessary institutional and legal certainty."

Another concern is the impact the technology could have on people and their data. Kaili said that as technology evolves, the risks do too. “It is not smart to regulate the technology per se, but rather its uses and the sectors that adopt this technology in their business models. Consumer protection and investor protection come first.”

 

Investment

 

The EU has already been promoting the technology. For example, it has already invested more than €80 million in projects supporting the use of blockchain. The European Commission has said around €300 million more will be allocated by 2020.

 

In addition, the Commission launched the EU Blockchain Observatory and Forum in February 2018.

  

Next steps

 

All MEPs will have the opportunity to vote on the resolution during an upcoming plenary session. If adopted, the resolution will be forwarded to the European Commission for consideration.

Source: Europarl

Artificial Intelligence helps to predict the likelihood of life on other planets

Developments in artificial intelligence may help us to predict the probability of life on other planets, according to new work by a team based at Plymouth University. The study uses artificial neural networks (ANNs) to classify planets into five types, estimating a probability of life in each case, which could be used in future interstellar exploration missions. The work is presented at the European Week of Astronomy and Space Science (EWASS) in Liverpool on 4 April by Mr Christopher Bishop.

Artificial neural networks are systems that attempt to replicate the way the human brain learns. They are one of the main tools used in machine learning, and are particularly good at identifying patterns that are too complex for a biological brain to process.

The team, based at the Centre for Robotics and Neural Systems at Plymouth University, have trained their network to classify planets into five different types, based on whether they are most like the present-day Earth, the early Earth, Mars, Venus or Saturn's moon Titan. All five of these objects are rocky bodies known to have atmospheres, and are among the most potentially habitable objects in our Solar System.

Mr Bishop comments, "We're currently interested in these ANNs for prioritising exploration for a hypothetical, intelligent, interstellar spacecraft scanning an exoplanet system at range."

He adds, "We're also looking at the use of large area, deployable, planar Fresnel antennas to get data back to Earth from an interstellar probe at large distances. This would be needed if the technology is used in robotic spacecraft in the future."

Atmospheric observations -- known as spectra -- of the five Solar System bodies are presented as inputs to the network, which is then asked to classify them in terms of the planetary type. As life is currently known only to exist on Earth, the classification uses a 'probability of life' metric which is based on the relatively well-understood atmospheric and orbital properties of the five target types.

Bishop has trained the network with over a hundred different spectral profiles, each with several hundred parameters that contribute to habitability. So far, the network performs well when presented with a test spectral profile that it hasn't seen before.
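
As a hedged sketch of what a five-way spectral classifier of this kind might look like, the example below trains a small neural network on synthetic "spectra". The data, network size and training setup are illustrative assumptions, not the Plymouth team's actual model.

```python
# A hedged sketch of a five-way spectral classifier. The spectra are random
# synthetic vectors; this is not the team's actual data or architecture.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
classes = ["present-day Earth", "early Earth", "Mars", "Venus", "Titan"]

# ~100 training profiles, each with a few hundred spectral parameters
X_train = rng.normal(size=(100, 300))
y_train = rng.integers(0, len(classes), size=100)

net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=1)
net.fit(X_train, y_train)

# Class probabilities for an unseen profile could feed a 'probability of life' metric;
# with synthetic data these numbers are meaningless beyond showing the mechanics.
unseen = rng.normal(size=(1, 300))
for name, p in zip(classes, net.predict_proba(unseen)[0]):
    print(f"{name}: {p:.2f}")
```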

"Given the results so far, this method may prove to be extremely useful for categorising different types of exoplanets using results from ground-based and near Earth observatories" says Dr Angelo Cangelosi, the supervisor of the project.

The technique may also be ideally suited to selecting targets for future observations, given the increase in spectral detail expected from upcoming space missions such as ESA's Ariel Space Mission and NASA's James Webb Space Telescope.

Source: Sciencedaily

Deep Learning comes full circle

Artificial intelligence drew much inspiration from the human brain but went off in its own direction. Now, AI has come full circle and is helping neuroscientists better understand how our own brains work.

For years, the people developing artificial intelligence drew inspiration from what was known about the human brain, and it has enjoyed a lot of success as a result. Now, AI is starting to return the favor.

Although not explicitly designed to do so, certain artificial intelligence systems seem to mimic our brains’ inner workings more closely than previously thought, suggesting that both AI and our minds have converged on the same approach to solving problems. If so, simply watching AI at work could help researchers unlock some of the deepest mysteries of the brain.

“There’s a real connection there,” said Daniel Yamins, assistant professor of psychology. Now, Yamins, who is also a faculty scholar of the Stanford Neurosciences Institute and a member of Stanford Bio-X, and his lab are building on that connection to produce better theories of the brain – how it perceives the world, how it shifts efficiently from one task to the next and perhaps, one day, how it thinks.

A vision problem for AI

Artificial intelligence has been borrowing from the brain since its early days, when computer scientists and psychologists developed algorithms called neural networks that loosely mimicked the brain. Those algorithms were frequently criticized for being biologically implausible – the “neurons” in neural networks were, after all, gross simplifications of the real neurons that make up the brain. But computer scientists didn’t care about biological plausibility. They just wanted systems that worked, so they extended neural network models in whatever way made the algorithm best able to carry out certain tasks, culminating in what is now called deep learning.

Then came a surprise. In 2012, AI researchers showed that a deep learning neural network could learn to identify objects in pictures as well as a human being, which got neuroscientists wondering: How did deep learning do it?

The same way the brain does, as it turns out. In 2014, Yamins and colleagues showed that a deep learning system that had learned to identify objects in pictures – nearly as well as humans could – did so in a way that closely mimicked the way the brain processes vision. In fact, the computations the deep learning system performed matched activity in the brain’s vision-processing circuits substantially better than any other model of those circuits.

Around the same time, other teams made similar observations about parts of the brain’s vision- and movement-processing circuits, suggesting that given the same kind of problem, deep learning and the brain had evolved similar ways of coming up with a solution. More recently, Yamins and colleagues have demonstrated similar observations in the brain’s auditory system.

On one hand, that’s not a big surprise. Although the technical details differ, deep learning’s conceptual organization is borrowed directly from what neuroscientists already knew about the organization of neurons in the brain.

But the success of Yamins and colleagues’ approach and others like it depends equally as much on another, more subtle choice. Rather than try to get the deep learning system to directly match what the brain does at the level of individual neurons, as many researchers had done, Yamins and colleagues simply gave their deep learning system the same problem: Identify objects in pictures. Only after it had solved that problem did the researchers compare how deep learning and the brain arrived at their solutions – and only then did it become clear that their methods were essentially the same.
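
One common way such comparisons are made (a sketch under assumptions, not the authors' exact procedure) is to fit a linear mapping from a trained network's activations to recorded neural responses and ask how well held-out responses are predicted:

```python
# A hedged sketch of comparing model activations to neural data via a linear
# mapping. Data here are synthetic; this is not the Yamins lab's exact method.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_images, n_units, n_neurons = 200, 512, 50

model_activations = rng.normal(size=(n_images, n_units))   # deep-net features per image
neural_responses = rng.normal(size=(n_images, n_neurons))  # recorded responses per image

X_tr, X_te, y_tr, y_te = train_test_split(model_activations, neural_responses,
                                          test_size=0.25, random_state=0)
mapping = Ridge(alpha=1.0).fit(X_tr, y_tr)
pred = mapping.predict(X_te)

# Per-neuron correlation between predicted and actual held-out responses
corr = [np.corrcoef(pred[:, i], y_te[:, i])[0, 1] for i in range(n_neurons)]
print("median predictivity:", float(np.median(corr)))
```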

“The correspondence between the models and the visual system is not entirely a coincidence, because one directly inspired the other,” said Daniel Bear, a postdoctoral researcher in Yamins’ group, “but it’s still remarkable that it’s as good a correspondence as it is.”

One likely reason for that, Bear said, is natural selection and evolution. “Basically, object recognition was a very evolutionarily important task” for animals to solve – and solve well, if they wanted to tell the difference between something they could eat and something that could eat them. Perhaps trying to do that as well as humans and other animals do – except with a computer – led researchers to find essentially the same solution.

Seek what the brain seeks

Whatever the underlying reason, insights gleaned from the 2014 study led to what Yamins calls goal-directed models of the brain: Rather than try to model neural activity in the brain directly, instead train artificial intelligence to solve problems the brain needs to solve, then use the resulting AI system as a model of the brain. Since 2014, Yamins and collaborators have been refining the original goal-directed model of the brain’s vision circuits and extending the work in new directions, including understanding the neural circuits that process inputs from rodents’ whiskers.

In perhaps the most ambitious project, Yamins and postdoctoral fellow Nick Haber are investigating how infants learn about the world around them through play. Their infants – actually relatively simple computer simulations – are motivated only by curiosity. They explore their worlds by moving around and interacting with objects, learning as they go to predict what happens when they hit balls or simply turn their heads. At the same time, the model learns to predict what parts of the world it doesn’t understand, then tries to figure those out.

While the computer simulation begins life – so to speak – knowing essentially nothing about the world, it eventually figures out how to categorize different objects and even how to smash two or three of them together. Although direct comparisons with babies’ neural activity might be premature, the model could help researchers better understand how infants use play to learn about their environments, Haber said.

On the other end of the spectrum, models inspired by artificial intelligence could help solve a puzzle about the physical layout of the brain, said Eshed Margalit, a graduate student in neurosciences. As the vision circuits in infants’ brains develop, they form specific patches – physical clusters of neurons – that respond to different kinds of objects. For example, humans and other primates all form a face patch that is active almost exclusively when they look at faces.

Exactly why the brain forms those patches, Margalit said, isn’t clear. The brain doesn’t need a face patch to recognize faces, for example. But by building on AI models like Yamins’ that already solve object recognition tasks, “we can now try to model that spatial structure and ask questions about why the brain is laid out this way and what advantages it might give an organism,” Margalit said.

Closing the loop

There are other issues to tackle as well, notably how artificial intelligence systems learn. Right now, AI needs much more training – and much more explicit training – than humans do in order to perform as well on tasks like object recognition, although how humans succeed with so little data remains unclear.

A second issue is how to go beyond models of vision and other sensory systems. “Once you have a sensory impression of the world, you want to make decisions based on it,” Yamins said. “We’re trying to make models of decision making, learning to make decisions and how you interface between sensory systems, decision making and memory.” Yamins is starting to address those ideas with Kevin Feigelis, a graduate student in physics, who is building AI models that can learn to solve many different kinds of problems and switch between tasks as needed, something very few AI systems are able to do.

In the long run, Yamins and the other members of his group said all of those advances could feed into more capable artificial intelligence systems, just as earlier neuroscience research helped foster the development of deep learning. “I think people in artificial intelligence are realizing there are certain very good next goals for cognitively inspired artificial intelligence,” Haber said, including systems like his that learn by actively exploring their worlds. “People are playing with these ideas.”

Source: Stanford

The impact of Big data on supply chain

You receive a notification on your phone that a critical shipment from your China factory has missed its filing deadline with the customs broker. Your logistics manager is alerted that there is an 80% chance that the components he’s waiting for are likely to be delayed another 48 hours by excessive port traffic and your GTM software advises diverting the shipment to an alternate port facility. Your compliance officer is informed that there is a 95% chance that a shipment of parts from Malaysia is likely to be held for up to three days to be subjected to a detailed customs inspection.

If you think this type of information would be of great assistance to your supply chain business planning and operations, you are not alone. It is this type of integrated data and communications that are becoming the backbone of the Big Data led revolution underway in supply chain.

The human brain can only process and make use of a limited amount of information before it becomes overwhelmed and unable to effectively recognise patterns and trends. But powerful algorithms and the software platforms they drive can take in almost unlimited numbers of data points and process them to generate insights impossible for an individual or even an entire organisation of individuals to identify. And powering this technology-driven transformation of supply chain is Big Data.

Big Data vs small data

To really understand how technology is transforming supply chain, it is important to understand how Big Data differs from any other form of information gathering. Data has always been crucial to efficient supply chain operations so what has actually changed in recent years? How is “Big Data” different from the analysis of “small data” that has always occurred in the industry?

Big Data refers to sets of both structured and unstructured data with so much volume that traditional data processing systems are inadequate to cope with it all. It can be further defined by some of the basic properties that apply to it:

  • Variety – data being generated from a wide number of varied sources
  • Volume – while there is no set distinction between where small data stops and Big Data starts, Big Data involves large storage requirements, often measured in many multiples of terabytes
  • Velocity – the speed at which the data can be acquired, transferred and stored
  • Complexity – difficulties encountered in forming relevant relationships in data, especially when it is taken from multiple sources
  • Value – the degree to which querying the data will result in generating beneficial outcomes

The most important property related to Big Data is, as the name implies, volume. We normally think of data purely in terms of text or numbers, but it includes the billions of emails, images, and tweets generated every day. In fact, data generation is expanding at a rate that doubles every two years, and human and machine-generated data is growing at 10 times the rate of traditional business data. IT World Canada projects that by 2020, you would need a stack of iPad Air tablets extending from the earth to the moon to store the world’s digital data.

But the real focus behind a preference for Big Data analysis over small data systems is the ability to uncover hidden trends and relationships in both structured and unstructured data. In most cases, using small data collection and analytics processes simply cannot identify crucial information in a timely manner to allow key decisions to be made or opportunities to be taken advantage of. In other cases, using small data systems is simply a waste of resources and leads to disruptions to supply chain operations.

By contrast, if used correctly, Big Data is the key to enhancing supply chain performance by increasing visibility, control, agility, and responsiveness. Making decisions based on high-quality information in context can benefit the full range of supply chain operations – from demand forecasting, inventory and logistics planning, and execution to shipping and warehouse management.

Big Data possibilities

Big Data analytics becomes a vital tool for making sense of the huge volumes of data that are produced every day. This data comes from a whole range of activities undertaken by people associated with supply chain, whether they be customers, suppliers, or your own staff. The range and volume of this data is continuously increasing, with billions of data points generated by sources we see as directly linked to supply chain such as network nodes and transaction and shipping records as well as other areas that more indirectly impact supply chains such as retail channels and social media content.

But it is increasingly becoming necessary to harness this data in order to remain competitive. This is evident from statements made by people such as Anthony Coops, Asia Pacific Data and Analytics Leader at KPMG Australia, who believes that “Big Data is certainly enabling better decisions and actions, and facilitating a move away from gut feel decision making.” At the same time, he recognises that solutions need to be put in place that allows for people and organisations to have complete faith in the data so that managers can really trust in the analytics and be confident in their decision making.

The need for confidence in the analytics is evident when considering examples such as GTM software having the information and capability to advise ahead of time that a shipment should be diverted to an alternate port, or that a product is likely to be held up in customs. These types of decisions have potentially large financial consequences, but when implemented correctly, it is easy to see how supply chain operational efficiency can be significantly boosted by effective use of Big Data analytics.

Many organisations are also using Big Data solutions to support integrated business planning and to better understand market trends and consumer behaviours. The integration of a range of market, product sales, social media trends, and demographic data from multiple data sources provides the capability to accurately predict and plan numerous supply chain actions.

IoT and AI-based analytics are used to predict asset maintenance requirements and avoid unscheduled downtime. IoT can also provide real-time production and shipping data while GPS driven data combined with traffic and weather information allows for dynamically planned and optimised shipping and delivery routes. These types of examples provide a glimpse into the possibilities and advantages that Big Data can offer in increasing the agility and efficiency of supply chain operations.

Disruptive technologies

What is driving these possibilities is the development of numerous disruptive technologies as well as the integration of both new and existing technologies to create high-quality networks of information. Disruptive technologies impact the way organisations operate by forcing them to deal with new competitive platforms. They also provide them with opportunities to enter new markets or to change the company’s competitive status. By identifying key disruptive technologies early, supply chain organisations can not only be better placed to adapt to changing market conditions, they can also gain a distinct advantage over others in the industry that are reluctant to embrace change.

In terms of Big Data based disruptive technologies, these are largely driven by the effects of constantly evolving and emergent internet technologies such as the Internet of Things combined with increased computing power, AI and machine learning based analytics platforms, and fast, pervasive digital communications. These technologies then act as drivers that spawn new ways of managing products, assets, and staff as well as generating new ways of thinking about organisational structures and workflows.

IoT

After being talked about for many years, we are now starting to see the Internet of Things really taking shape. There will be a thirty-fold increase in the number of Internet-connected physical devices by 2020 and this will significantly impact the ways that supply chains operate.

IoT allows for numerous solutions to intelligently connect systems, people, processes, data, and devices via a network of connected sensors. Through improved data collection and intelligence, supply chains will benefit from greater automation of the manufacturing and shipping process, made possible through enhanced visibility of activities from the warehouse to the customer.

Cloud-based GPS and Radio Frequency Identification (RFID) technologies, which provide location, product identification, and other tracking information, play a key role in the IoT landscape. Sensors can be used to provide a wealth of information targeted to specific niches within the supply chain, such as fresh produce distribution, where temperature or humidity levels can be precisely tracked along the entire journey of a product. Data gathered from GPS and RFID technologies also facilitates automated shipping and delivery processes by precisely predicting the time of arrival.

Big Data analytics

Big Data analytics encompasses the qualitative and quantitative techniques that are used to generate insights to enhance productivity. The more supply chain technologies are reliant on Big Data, either in their business model or as a result of their impact on an organisation, the more organisations have to rely on the effective use of Big Data analytics to help them make sense of the volumes of data being generated. Analytics also helps to make it possible to understand the processes and strategies used by competitors across the industry. Using analytics effectively allows an organisation to make the best decisions to ensure they stay at the forefront of their particular market sector.

As corporations face financial pressures to increase profit margins and customer expectations to shorten delivery times, the importance of Big Data analytics continues to grow. A Gartner, Inc. study put the 2017 business intelligence and analytics market at a value of over US$18 billion, while sales of prescriptive analytics software are estimated to grow from approximately US$415 million in 2014 to US$1.1 billion in 2019.

Over time, the effectiveness and capabilities of analytics software also continue to improve as machine learning-based technologies take forecast data and continually compare it back to real operational and production data. The longer an organisation operates its analytics software, the better it performs: the iterative nature of artificial intelligence powered algorithms means that the performance and value of the software improve over time. This leads to benefits such as more accurate forecasts of shipping times or of supplier obstacles and bottlenecks.
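To make that feedback loop concrete, here is a minimal sketch (using scikit-learn and entirely made-up shipment figures, not any particular vendor's product) of a transit-time forecast being refit as new actuals arrive:

```python
# Minimal sketch: refit a transit-time forecast as new actuals arrive.
# Features (distance_km, customs_flag) and all figures are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

X_hist = np.array([[800, 0], [1200, 1], [400, 0], [1500, 1]])  # past shipments
y_hist = np.array([3.0, 6.5, 1.5, 7.0])                        # observed transit days

model = LinearRegression().fit(X_hist, y_hist)

def ingest_actuals(X_new, y_new):
    """Fold this week's observed shipments back in and refit the model."""
    global X_hist, y_hist, model
    X_hist = np.vstack([X_hist, X_new])
    y_hist = np.concatenate([y_hist, y_new])
    model = LinearRegression().fit(X_hist, y_hist)

# Each cycle, compare last week's forecasts with what actually happened,
# add the actuals to the history, and refit; forecast error tends to shrink.
ingest_actuals(np.array([[900, 0]]), np.array([3.4]))
print(model.predict(np.array([[1000, 1]])))  # forecast for an upcoming shipment
```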

Consumer behaviour analysis

Although it may not initially seem as vital to supply chain as other disruptive technologies, consumer behaviour analysis can have a huge impact on businesses working in supply chain, especially e-commerce businesses. Through what is known as clickstream analysis, large amounts of company, industry, product, and customer information can be gathered from the web. Various text and web mining tools and techniques are then used to both organise and visualise this information.

By analysing customer clickstream data logs, web analytics tools such as Google Analytics can provide a trail of online customer activities and provide insights on their purchasing patterns. This allows more accurate seasonal forecasts to be generated that can then drive inventory and resourcing plans. This type of data is extremely valuable and is crucial for any organisations operating in the e-commerce space. While retailers and consumer companies have always collected data on buying patterns, the ability to pull together information from potentially thousands of different variables that have traditionally been collected in silos provides enormous economic opportunities.
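As a hedged illustration of the idea (the clickstream log, field names, and purchase-rate metric below are invented for the example, not the export format of a real analytics tool), a seasonal purchasing profile can be built by aggregating raw click events by month:

```python
# Minimal sketch: turn a hypothetical clickstream log into a monthly
# purchase-rate profile that could feed inventory and resourcing plans.
import pandas as pd

clicks = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2023-11-02", "2023-11-15", "2023-12-01", "2023-12-20", "2024-01-05",
    ]),
    "event": ["view", "purchase", "view", "purchase", "view"],
})

clicks["month"] = clicks["timestamp"].dt.to_period("M")
monthly = clicks.groupby("month")["event"].apply(
    lambda s: (s == "purchase").mean()  # share of events that were purchases
)
print(monthly)  # a crude seasonal demand signal
```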

Potential drawbacks and challenges

Despite the huge opportunities presented by implementing Big Data powered solutions, there can be intimidating barriers to entry when it comes to putting in place Big Data collection and analytics solutions. This can emerge across a range of areas including the complexities around data collection and the difficulties of putting in place the technologies and infrastructure needed to turn that data into useful insights.

Getting complete buy-in

One impediment to adopting a holistic Big Data approach centres on securing unified support at all levels of your company for comprehensive Big Data systems. Management commitment and support are crucial, and large-scale initiatives of this type usually occur from the top down. However, Big Data analytics initiatives usually originate at mid-level, from the people who actually collect and use data day to day. This means the need for implementation must often be sold upwards, and selling the importance of Big Data to managers who don't understand why that type of expense is necessary can be extremely challenging.

Sourcing clean data

One of the other main challenges is undoubtedly sourcing appropriate and consistent data. There's no use getting high-quality data if it doesn't directly apply to your particular market sector. Nor is there much benefit in obtaining high-quality data if it cannot be sourced regularly enough to build a long-term profile of the company's operations and market forces. These challenges are often related to technical issues such as integration with previously siloed data or data security concerns.

Richard Sharpe, CEO of Competitive Insights, a supply chain analytics company, believes that the data quality problem is a complex issue that can have many different causes. However, he believes that these challenges can be overcome by management having a clear understanding of what they’re trying to achieve. “You have to show that what you’re ultimately trying to do with supply chain data analytics is to make the enterprise more successful and profitable.” This then leads to support being provided by company leadership who, in tandem with operations managers, can develop the processes required to govern quality data collection. This includes proper consultation with subject matter experts who can help ensure that all data is properly validated.

Managing data volumes

New technologies make it possible for supply chain organisations to collect huge volumes of information from an ever-expanding number of sources. These data points can quickly run into the billions, making them challenging to analyse with any level of accuracy or to translate directly into innovation and improvement.

This means that despite many organisations embracing Big Data strategies, many do not actually derive sustainable value from the data they're accumulating, because they begin to drown in the sheer volume of data or lack the software and management tools needed to make use of it. A common phrase used to summarise this effect is “paralysis by analysis”. Without a thorough understanding of the technologies and systems needed to process and store the data collected, an organisation can easily fall into this condition.

Building the infrastructure

Companies need to invest in the right technologies to have a true 360-degree view of their business. And in many cases, these technologies can involve large initial capital outlays. Getting the infrastructure in place is key to being able to collect, process, and analyse data that enables you to track inventory, assets and materials in your supply chain.

Putting the infrastructure in place may also require additional training expenses, so that staff are properly trained in how to use new software platforms or how to maintain sensors and other new IoT devices. In some cases, this will extend to hiring new talent capable of using and interpreting new analytical tools.

Conclusion

Big Data offers huge opportunities to supply chain organisations, as vital information contained within multiple data sources can now be consolidated and analysed. These new perspectives can reveal the insights necessary to understand and solve problems that were previously considered too complex. New insights can also encourage organisations to scale intelligent systems across all activities in the supply chain, embedding intelligence in every part of the business.

There is also no doubt that implementing comprehensive Big Data solutions can involve new and significant challenges. However, once the new infrastructure and processes are in place, the nature of modern Cloud-based networks allows for data to be accessed easily from anywhere at any time. It also allows for other benefits beyond cost reduction and production gains to be realised over time, such as ongoing rather than just one-off efficiency gains and improved transparency and compliance tracking across the entire organisation.

Bastian Managing Director, Tony Richter, is a supply chain industry expert with 7+ years executing senior supply chain search across APAC. He works exclusively with a small portfolio of clients and prides himself on the creation of a transparent, credible, and focused approach. This ensures long-term trust can be established with all clients and candidates.

Source: Bastian Consulting

What is AI?

There is a mountain of hype around big data, artificial intelligence (AI), and machine learning. It’s a bit like kissing in the schoolyard – everyone is talking about it, but few are really doing it, and nobody is doing it well (shoutout to my friend Steve Totman at Cloudera for that line). There is certainly broad consensus that organizations need to be monetizing their data. But with all the noise around these new technologies, I think many business leaders are left scratching their heads about what it all means.

Given the huge diversity of applications and opinions on this topic, it may be folly, but I’d like to attempt to provide a practical, useful definition of artificial intelligence. While my definition probably won’t win any accolades for theoretical accuracy, I believe that it will provide a useful framework for talking about the specific actions that an organization needs to take in order to make the most of their data.

The theoretical definition

If you asked a computer scientist (or Will Smith), they would tell you that AI is what you get when you create a computer that is capable of thinking for itself. It's HAL from 2001: A Space Odyssey or Lt. Commander Data from Star Trek: The Next Generation (two of the greatest masterpieces of all time). These computers are self-aware: thinking, independent machines that are (unfortunately) very likely to take over the world.

While that definition may be strictly accurate from the ivory tower, it's not particularly practical. No scientist has created such a thing, and no business is seriously considering using such an entity in its business model.

Laying aside that definition, then, let’s look to something much more practical that can actually move the conversation forward in business.

AI is not machine learning

There are two main concepts, according to my definitions, that are important. AI is one, and I shall define it shortly. Machine learning is the second. There’s just as much confusion about the definition of machine learning as there is about AI, and I think it’s important to point out that they’re not the same.

Machine learning is known by other names. Harvard Business Review called it data science, and dubbed it the sexiest job of the 21st century – which is a pretty bold claim, given that there are a lot of years left until the 22nd century. Years ago, it was called “statistics” or “predictive modeling.”

Whatever you call it, machine learning is a method of using historical data to make predictions about the future. The machine learns from those historical examples to build a model that can then be used to make predictions about new data.

For example, credit card companies need to detect fraudulent transactions in real time so that they can block them. Losing money to fraud is a big problem for card providers, and detecting fraud is an ideal machine learning problem. Credit card providers have a mountain of historical transactions, some of which were flagged as fraudulent. Using machine learning, the historical transactions can be used to train a model. That model is basically a machine that looks at a transaction and judges how likely it is to be fraud.
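A minimal sketch of that training step, using scikit-learn and invented transaction features (amount and whether the merchant is new to the cardholder) rather than anything a real card network actually uses:

```python
# Minimal sketch: train a fraud classifier on labelled historical transactions.
# Features, figures, and labels are invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Each row: [amount_usd, is_new_merchant]; label 1 = confirmed fraud
X_train = [[25.0, 0], [3200.0, 1], [12.5, 0], [980.0, 1], [45.0, 0], [2100.0, 1]]
y_train = [0, 1, 0, 1, 0, 1]

clf = LogisticRegression().fit(X_train, y_train)

# The trained model looks at a new transaction and scores how likely it is fraud
new_txn = [[1500.0, 1]]
print(clf.predict_proba(new_txn)[0][1])  # estimated probability of fraud
```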

Another common example in the healthcare space is predicting patient outcomes. Suppose a patient goes to the ER and ends up getting an infection while they’re in the hospital. That’s a bad outcome for the patient (obviously), but also for the hospital and the insurance companies and so on. It’s in everyone’s interest to try to prevent these kinds of incidents.

Healthcare providers frequently use past patient data (including information on patients that both did and did not have a bad outcome) in order to build models that can predict whether or not a particular patient is likely to have a bad outcome in the future.

Machine learning models are very narrowly defined. They predict an event or a number. Is the patient going to get sicker? How much pipeline will my sales team generate next quarter? Will this potential customer respond to my marketing message? The models are designed to answer a very specific question by making a very specific prediction, and in turn become important inputs into AI solutions.

Artificial intelligence combines data, business logic, and predictions

Having a machine learning model is like having a superpower or a crystal ball. I can feed it data and it will make predictions about the future. These models can identify potentially bad loans before they default. They can forecast revenue out into the future. They can highlight places where crimes are likely to occur. The AI system is how you put them to practical use.

Let’s go back to the credit card fraud example. Suppose I could tell you by means of a machine learning model whether or not a transaction was likely to be fraudulent. What would you do? Even thinking about it for a minute makes it obvious that there’s a lot more work to do before you can start getting value out of that model.

Here are some questions that you need to consider in this example:

  1. What data is available to me at the time of the transaction?
  2. How much time do I have in order to process the data and reject the transaction?
  3. What regulations restrict my ability to block potentially fraudulent transactions?
  4. Nobody likes having legitimate transactions blocked. What customer experience concerns do I need to address?
  5. What false positive rates and false negative rates am I comfortable with?
  6. …and so on

There are many more questions that a credit card provider would need to consider before implementing a system to block potentially fraudulent transactions.

That system, though, is what I call AI. It’s the combination of all the business logic, all the data, and all the predictions that I need in order to automate a decision or a process.

  • Business Logic: Business logic is probably the most important aspect of implementing an AI system. It encompasses the user experience, the legal compliance issues, the various thresholds and flags that I may need, and so on. It's basically the glue that holds together the whole process.
  • Data: AI systems reach out for data. They might need to aggregate customer data, summarize transactions, collect a measurement from a sensor, and so on. Regardless of where it comes from, data drives the AI system; without it, the system comes screeching to a halt.
  • Predictions: Not every AI system uses predictions, but all of the good ones do. Anyone who has ever called their cable provider has dealt with the endless automated phone system. It's trying to automate a process, but it's not being smart about it. It's dumb AI. Smart AI might make predictions about why I was calling and attempt to route me to the right place, for instance. Predictions are the technology that makes AI truly smart. A rough sketch of how these three pieces fit together follows this list.
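Here is that sketch: a minimal illustration, with hypothetical thresholds, field names, and a classifier like the one sketched earlier, of how a model's fraud score can be wrapped in business logic to automate the decision:

```python
# Minimal sketch of an "AI system": a fraud model wrapped in business logic.
# Thresholds, field names, and outcomes are hypothetical illustrations.
FRAUD_BLOCK_THRESHOLD = 0.90   # auto-decline above this score
FRAUD_REVIEW_THRESHOLD = 0.60  # route to a human analyst above this score

def handle_transaction(txn, model):
    # Data: assemble the features available at authorisation time
    features = [[txn["amount_usd"], txn["is_new_merchant"]]]

    # Prediction: score how likely the transaction is to be fraudulent
    fraud_score = model.predict_proba(features)[0][1]

    # Business logic: risk appetite, regulation, and customer experience
    if fraud_score >= FRAUD_BLOCK_THRESHOLD:
        return "decline"
    if fraud_score >= FRAUD_REVIEW_THRESHOLD:
        return "hold_for_review"
    return "approve"
```

The prediction on its own decides nothing; the thresholds and the review path are where the answers to the questions above get encoded.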

Source: Datarobot

The Business of Artificial Intelligence

For more than 250 years the fundamental drivers of economic growth have been technological innovations. The most important of these are what economists call general-purpose technologies — a category that includes the steam engine, electricity, and the internal combustion engine. Each one catalysed waves of complementary innovations and opportunities. The internal combustion engine, for example, gave rise to cars, trucks, airplanes, chain saws, and lawnmowers, along with big-box retailers, shopping centres, cross-docking warehouses, new supply chains, and, when you think about it, suburbs. Companies as diverse as Walmart, UPS, and Uber found ways to leverage the technology to create profitable new business models.

The most important general-purpose technology of our era is artificial intelligence, particularly machine learning (ML) — that is, the machine’s ability to keep improving its performance without humans having to explain exactly how to accomplish all the tasks it’s given. Within just the past few years machine learning has become far more effective and widely available. We can now build systems that learn how to perform tasks on their own.

Why is this such a big deal? Two reasons. First, we humans know more than we can tell: We can’t explain exactly how we’re able to do a lot of things — from recognizing a face to making a smart move in the ancient Asian strategy game of Go. Prior to ML, this inability to articulate our own knowledge meant that we couldn’t automate many tasks. Now we can.

Second, ML systems are often excellent learners. They can achieve superhuman performance in a wide range of activities, including detecting fraud and diagnosing disease. Excellent digital learners are being deployed across the economy, and their impact will be profound.

In the sphere of business, AI is poised to have a transformational impact, on the scale of earlier general-purpose technologies. Although it is already in use in thousands of companies around the world, most big opportunities have not yet been tapped. The effects of AI will be magnified in the coming decade, as manufacturing, retailing, transportation, finance, health care, law, advertising, insurance, entertainment, education, and virtually every other industry transform their core processes and business models to take advantage of machine learning. The bottleneck now is in management, implementation, and business imagination.

Like so many other new technologies, however, AI has generated lots of unrealistic expectations. We see business plans liberally sprinkled with references to machine learning, neural nets, and other forms of the technology, with little connection to its real capabilities. Simply calling a dating site “AI-powered,” for example, doesn’t make it any more effective, but it might help with fundraising. This article will cut through the noise to describe the real potential of AI, its practical implications, and the barriers to its adoption.

 

WHAT CAN AI DO TODAY?

The term artificial intelligence was coined in 1955 by John McCarthy, a math professor at Dartmouth who organized the seminal conference on the topic the following year. Ever since, perhaps in part because of its evocative name, the field has given rise to more than its share of fantastic claims and promises. In 1957 the economist Herbert Simon predicted that computers would beat humans at chess within 10 years. (It took 40.) In 1967 the cognitive scientist Marvin Minsky said, “Within a generation the problem of creating ‘artificial intelligence’ will be substantially solved.” Simon and Minsky were both intellectual giants, but they erred badly. Thus it’s understandable that dramatic claims about future breakthroughs meet with a certain amount of scepticism.

Let’s start by exploring what AI is already doing and how quickly it is improving. The biggest advances have been in two broad areas: perception and cognition. In the former category some of the most practical advances have been made in relation to speech. Voice recognition is still far from perfect, but millions of people are now using it — think Siri, Alexa, and Google Assistant. The text you are now reading was originally dictated to a computer and transcribed with sufficient accuracy to make it faster than typing. A study by the Stanford computer scientist James Landay and colleagues found that speech recognition is now about three times as fast, on average, as typing on a cell phone. The error rate, once 8.5%, has dropped to 4.9%. What’s striking is that this substantial improvement has come not over the past 10 years but just since the summer of 2016.

Image recognition, too, has improved dramatically. You may have noticed that Facebook and other apps now recognize many of your friends’ faces in posted photos and prompt you to tag them with their names. An app running on your smartphone will recognize virtually any bird in the wild. Image recognition is even replacing ID cards at corporate headquarters. Vision systems, such as those used in self-driving cars, formerly made a mistake when identifying a pedestrian as often as once in 30 frames (the cameras in these systems record about 30 frames a second); now they err less often than once in 30 million frames. The error rate for recognizing images from a large database called ImageNet, with several million photographs of common, obscure, or downright weird images, fell from higher than 30% in 2010 to about 4% in 2016 for the best systems. (See the exhibit “Puppy or Muffin?”)

The speed of improvement has accelerated rapidly in recent years as a new approach, based on very large or “deep” neural nets, was adopted. The ML approach for vision systems is still far from flawless — but even people have trouble quickly recognizing puppies’ faces or, more embarrassingly, seeing their cute faces where none exist.

The second type of major improvement has been in cognition and problem solving. Machines have already beaten the finest (human) players of poker and Go — achievements that experts had predicted would take at least another decade. Google’s DeepMind team has used ML systems to improve the cooling efficiency at data centres by more than 15%, even after they were optimized by human experts. Intelligent agents are being used by the cybersecurity company Deep Instinct to detect malware, and by PayPal to prevent money laundering. A system using IBM technology automates the claims process at an insurance company in Singapore, and a system from Lumidatum, a data science platform firm, offers timely advice to improve customer support. Dozens of companies are using ML to decide which trades to execute on Wall Street, and more and more credit decisions are made with its help. Amazon employs ML to optimize inventory and improve product recommendations to customers. Infinite Analytics developed one ML system to predict whether a user would click on a particular ad, improving online ad placement for a global consumer packaged goods company, and another to improve customers’ search and discovery process at a Brazilian online retailer. The first system increased advertising ROI threefold, and the second resulted in a $125 million increase in annual revenue.

UNDERSTANDING MACHINE LEARNING

The most important thing to understand about ML is that it represents a fundamentally different approach to creating software: The machine learns from examples, rather than being explicitly programmed for a particular outcome. This is an important break from previous practice. For most of the past 50 years, advances in information technology and its applications have focused on codifying existing knowledge and procedures and embedding them in machines. Indeed, the term “coding” denotes the painstaking process of transferring knowledge from developers’ heads into a form that machines can understand and execute. This approach has a fundamental weakness: Much of the knowledge we all have is tacit, meaning that we can’t fully explain it. It’s nearly impossible for us to write down instructions that would enable another person to learn how to ride a bike or to recognize a friend’s face.

In other words, we all know more than we can tell. This fact is so important that it has a name: Polanyi’s Paradox, for the philosopher and polymath Michael Polanyi, who described it in 1964. Polanyi’s Paradox not only limits what we can tell one another but has historically placed a fundamental restriction on our ability to endow machines with intelligence. For a long time that limited the activities that machines could productively perform in the economy.

Machine learning is overcoming those limits. In this second wave of the second machine age, machines built by humans are learning from examples and using structured feedback to solve, on their own, problems such as Polanyi’s classic one of recognizing a face.

DIFFERENT FLAVORS OF MACHINE LEARNING

Artificial intelligence and machine learning come in many flavours, but most of the successes in recent years have been in one category: supervised learning systems, in which the machine is given lots of examples of the correct answer to a particular problem. This process almost always involves mapping from a set of inputs, X, to a set of outputs, Y. For instance, the inputs might be pictures of various animals, and the correct outputs might be labels for those animals: dog, cat, horse. The inputs could also be waveforms from a sound recording and the outputs could be words: “yes,” “no,” “hello,” “good-bye.” (See the exhibit “Supervised Learning Systems.”)

Successful systems often use a training set of data with thousands or even millions of examples, each of which has been labelled with the correct answer. The system can then be let loose to look at new examples. If the training has gone well, the system will predict answers with a high rate of accuracy.

The algorithms that have driven much of this success depend on an approach called deep learning, which uses neural networks. Deep learning algorithms have a significant advantage over earlier generations of ML algorithms: They can make better use of much larger data sets. The old systems would improve as the number of examples in the training data grew, but only up to a point, after which additional data didn’t lead to better predictions. According to Andrew Ng, one of the giants of the field, deep neural nets don’t seem to level off in this way: More data leads to better and better predictions. Some very large systems are trained by using 36 million examples or more. Of course, working with extremely large data sets requires more and more processing power, which is one reason the very big systems are often run on supercomputers or specialized computer architectures.
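As a rough illustration only (a toy Keras model trained on random data, not one of the large systems described above), a deep network is simply several learned layers stacked between the inputs X and the outputs Y:

```python
# Minimal sketch: a small "deep" network for a supervised mapping from X to Y.
# Layer sizes, data, and training settings are arbitrary toy choices.
import numpy as np
import tensorflow as tf

X = np.random.rand(1000, 20)                 # 1,000 examples, 20 input features
y = (X.sum(axis=1) > 10).astype("float32")   # a made-up binary label

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),   # the extra "deep" layer
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# More labelled examples generally keep improving a network like this,
# which is the property Andrew Ng describes above.
model.fit(X, y, epochs=5, verbose=0)
```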

Any situation in which you have a lot of data on behaviour and are trying to predict an outcome is a potential application for supervised learning systems. Jeff Wilke, who leads Amazon’s consumer business, says that supervised learning systems have largely replaced the memory-based filtering algorithms that were used to make personalized recommendations to customers. In other cases, classic algorithms for setting inventory levels and optimizing supply chains have been replaced by more efficient and robust systems based on machine learning. JPMorgan Chase introduced a system for reviewing commercial loan contracts; work that used to take loan officers 360,000 hours can now be done in a few seconds. And supervised learning systems are now being used to diagnose skin cancer. These are just a few examples.

It’s comparatively straightforward to label a body of data and use it to train a supervised learner; that’s why supervised ML systems are more common than unsupervised ones, at least for now. Unsupervised learning systems seek to learn on their own. We humans are excellent unsupervised learners: We pick up most of our knowledge of the world (such as how to recognize a tree) with little or no labelled data. But it is exceedingly difficult to develop a successful machine learning system that works this way.

If and when we learn to build robust unsupervised learners, exciting possibilities will open up. These machines could look at complex problems in fresh ways to help us discover patterns — in the spread of diseases, in price moves across securities in a market, in customers’ purchase behaviours, and so on — that we are currently unaware of. Such possibilities lead Yann LeCun, the head of AI research at Facebook and a professor at NYU, to compare supervised learning systems to the frosting on the cake and unsupervised learning to the cake itself.

 

Another small but growing area within the field is reinforcement learning. This approach is embedded in systems that have mastered Atari video games and board games like Go. It is also helping to optimize data centre power usage and to develop trading strategies for the stock market. Robots created by Kindred use machine learning to identify and sort objects they’ve never encountered before, speeding up the “pick and place” process in distribution centres for consumer goods. In reinforcement learning systems the programmer specifies the current state of the system and the goal, lists allowable actions, and describes the elements of the environment that constrain the outcomes for each of those actions. Using the allowable actions, the system has to figure out how to get as close to the goal as possible. These systems work well when humans can specify the goal but not necessarily how to get there. For instance, Microsoft used reinforcement learning to select headlines for MSN.com news stories by “rewarding” the system with a higher score when more visitors clicked on the link. The system tried to maximize its score on the basis of the rules its designers gave it. Of course, this means that a reinforcement learning system will optimize for the goal you explicitly reward, not necessarily the goal you really care about (such as lifetime customer value), so specifying the goal correctly and clearly is critical.
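To ground those terms, here is a minimal sketch of tabular Q-learning on a toy five-cell corridor; the states, the single goal reward, and the hyperparameters are arbitrary choices for illustration:

```python
# Minimal sketch: Q-learning on a 5-cell corridor. The agent starts in cell 0,
# may step left or right, and is rewarded only for reaching cell 4 (the goal).
import random

N_STATES, GOAL = 5, 4
ACTIONS = [+1, -1]                      # allowable actions: step right or left
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for _ in range(500):                    # training episodes
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit the best-known action, sometimes explore
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda x: Q[(s, x)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0
        # update the estimate toward reward plus discounted future value
        best_next = max(Q[(s_next, x)] for x in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# learned policy: which step the agent prefers in each non-goal cell
print({s: max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(GOAL)})
```

The programmer specifies only the states, the allowable actions, and the reward; the system works out for itself how to reach the goal, which is exactly the division of labour described above.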

PUTTING MACHINE LEARNING TO WORK

There are three pieces of good news for organizations looking to put ML to use today. First, AI skills are spreading quickly. The world still has nowhere near enough data scientists and machine learning experts, but the demand for them is being met by online educational resources as well as by universities. The best of these, including Udacity, Coursera, and fast.ai, do much more than teach introductory concepts; they can actually get smart, motivated students to the point of being able to create industrial-grade ML deployments. In addition to training their own people, interested companies can use online talent platforms such as Upwork, Topcoder, and Kaggle to find ML experts with verifiable expertise.

The second welcome development is that the necessary algorithms and hardware for modern AI can be bought or rented as needed. Google, Amazon, Microsoft, Salesforce, and other companies are making powerful ML infrastructure available via the cloud. The cutthroat competition among these rivals means that companies that want to experiment with or deploy ML will see more and more capabilities available at ever-lower prices over time.

The final piece of good news, and probably the most underappreciated, is that you may not need all that much data to start making productive use of ML. The performance of most machine learning systems improves as they’re given more data to work with, so it seems logical to conclude that the company with the most data will win. That might be the case if “win” means “dominate the global market for a single application such as ad targeting or speech recognition.” But if success is defined instead as significantly improving performance, then sufficient data is often surprisingly easy to obtain.

For example, Udacity cofounder Sebastian Thrun noticed that some of his salespeople were much more effective than others when replying to inbound queries in a chat room. Thrun and his graduate student Zayd Enam realized that their chat room logs were essentially a set of labelled training data — exactly what a supervised learning system needs. Interactions that led to a sale were labelled successes, and all others were labelled failures. Zayd used the data to predict what answers successful salespeople were likely to give in response to certain very common inquiries and then shared those predictions with the other salespeople to nudge them toward better performance. After 1,000 training cycles, the salespeople had increased their effectiveness by 54% and were able to serve twice as many customers at a time.
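A hedged sketch of that kind of setup, with invented chat replies and labels rather than Udacity's actual data or system, might look like this:

```python
# Minimal sketch: treat past chat replies as labelled training data
# (did the conversation end in a sale?) and score new draft replies.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

replies = [
    "Here is a link to the syllabus and a discount code",
    "Please check our website",
    "I can enroll you right now and answer any questions",
    "We will get back to you later",
]
led_to_sale = [1, 0, 1, 0]   # label: 1 if the interaction ended in a sale

clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(replies, led_to_sale)

# Nudge a salesperson by scoring how much a draft resembles past successes
print(clf.predict_proba(["I can enroll you today"])[0][1])
```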

The AI startup WorkFusion takes a similar approach. It works with companies to bring higher levels of automation to back-office processes such as paying international invoices and settling large trades between financial institutions. The reason these processes haven’t been automated yet is that they’re complicated; relevant information isn’t always presented the same way every time (“How do we know what currency they’re talking about?”), and some interpretation and judgment are necessary. WorkFusion’s software watches in the background as people do their work and uses their actions as training data for the cognitive task of classification (“This invoice is in dollars. This one is in yen. This one is in euros…”). Once the system is confident enough in its classifications, it takes over the process.

Machine learning is driving changes at three levels: tasks and occupations, business processes, and business models. An example of task-and-occupation redesign is the use of machine vision systems to identify potential cancer cells — freeing up radiologists to focus on truly critical cases, to communicate with patients, and to coordinate with other physicians. An example of process redesign is the reinvention of the workflow and layout of Amazon fulfilment centres after the introduction of robots and optimization algorithms based on machine learning. Similarly, business models need to be rethought to take advantage of ML systems that can intelligently recommend music or movies in a personalized way. Instead of selling songs à la carte on the basis of consumer choices, a better model might offer a subscription to a personalized station that predicted and played music a particular customer would like, even if the person had never heard it before.

Note that machine learning systems hardly ever replace the entire job, process, or business model. Most often they complement human activities, which can make their work ever more valuable. The most effective rule for the new division of labour is rarely, if ever, “give all tasks to the machine.” Instead, if the successful completion of a process requires 10 steps, one or two of them may become automated while the rest become more valuable for humans to do. For instance, the chat room sales support system at Udacity didn’t try to build a bot that could take over all the conversations; rather, it advised human salespeople about how to improve their performance. The humans remained in charge but became vastly more effective and efficient. This approach is usually much more feasible than trying to design machines that can do everything humans can do. It often leads to better, more satisfying work for the people involved and ultimately to a better outcome for customers.

Designing and implementing new combinations of technologies, human skills, and capital assets to meet customers’ needs requires large-scale creativity and planning. It is a task that machines are not very good at. That makes being an entrepreneur or a business manager one of society’s most rewarding jobs in the age of ML.

RISKS AND LIMITS

The second wave of the second machine age brings with it new risks. In particular, machine learning systems often have low “interpretability,” meaning that humans have difficulty figuring out how the systems reached their decisions. Deep neural networks may have hundreds of millions of connections, each of which contributes a small amount to the ultimate decision. As a result, these systems’ predictions tend to resist simple, clear explanation. Unlike humans, machines are not (yet!) good storytellers. They can’t always give a rationale for why a particular applicant was accepted or rejected for a job, or a particular medicine was recommended. Ironically, even as we have begun to overcome Polanyi’s Paradox, we’re facing a kind of reverse version: Machines know more than they can tell us.

This creates three risks. First, the machines may have hidden biases, derived not from any intent of the designer but from the data provided to train the system. For instance, if a system learns which job applicants to accept for an interview by using a data set of decisions made by human recruiters in the past, it may inadvertently learn to perpetuate their racial, gender, ethnic, or other biases. Moreover, these biases may not appear as an explicit rule but, rather, be embedded in subtle interactions among the thousands of factors considered.

A second risk is that, unlike traditional systems built on explicit logic rules, neural network systems deal with statistical truths rather than literal truths. That can make it difficult, if not impossible, to prove with complete certainty that the system will work in all cases — especially in situations that weren’t represented in the training data. Lack of verifiability can be a concern in mission-critical applications, such as controlling a nuclear power plant, or when life-or-death decisions are involved.

Third, when the ML system does make errors, as it almost inevitably will, diagnosing and correcting exactly what’s going wrong can be difficult. The underlying structure that led to the solution can be unimaginably complex, and the solution may be far from optimal if the conditions under which the system was trained change.

While all these risks are very real, the appropriate benchmark is not perfection but the best available alternative. After all, we humans, too, have biases, make mistakes, and have trouble explaining truthfully how we arrived at a particular decision. The advantage of machine-based systems is that they can be improved over time and will give consistent answers when presented with the same data.

Does that mean there is no limit to what artificial intelligence and machine learning can do? Perception and cognition cover a great deal of territory — from driving a car to forecasting sales to deciding whom to hire or promote. We believe the chances are excellent that AI will soon reach superhuman levels of performance in most or all of these areas. So what won’t AI and ML be able to do?

We sometimes hear “Artificial intelligence will never be good at assessing emotional, crafty, sly, inconsistent human beings — it’s too rigid and impersonal for that.” We don’t agree. ML systems like those at Affectiva are already at or beyond human-level performance in discerning a person’s emotional state on the basis of tone of voice or facial expression. Other systems can infer when even the world’s best poker players are bluffing well enough to beat them at the amazingly complex game Heads-up No-Limit Texas Hold’em. Reading people accurately is subtle work, but it’s not magic. It requires perception and cognition — exactly the areas in which ML is currently strong and getting stronger all the time.

A great place to start a discussion of the limits of AI is with Pablo Picasso’s observation about computers: “But they are useless. They can only give you answers.” They’re actually far from useless, as ML’s recent triumphs show, but Picasso’s observation still provides insight. Computers are devices for answering questions, not for posing them. That means entrepreneurs, innovators, scientists, creators, and other kinds of people who figure out what problem or opportunity to tackle next, or what new territory to explore, will continue to be essential.


Similarly, there’s a huge difference between passively assessing someone’s mental state or morale and actively working to change it. ML systems are getting quite good at the former but remain well behind us at the latter. We humans are a deeply social species; other humans, not machines, are best at tapping into social drives such as compassion, pride, solidarity, and shame in order to persuade, motivate, and inspire. In 2014 the TED Conference and the XPrize Foundation announced an award for “the first artificial intelligence to come to this stage and give a TED Talk compelling enough to win a standing ovation from the audience.” We doubt the award will be claimed anytime soon.

We think the biggest and most important opportunities for human smarts in this new age of super powerful ML lie at the intersection of two areas: figuring out what problems to work on next, and persuading a lot of people to tackle them and go along with the solutions. This is a decent definition of leadership, which is becoming much more important in the second machine age.

The status quo of dividing up work between minds and machines is falling apart very quickly. Companies that stick with it are going to find themselves at an ever-greater competitive disadvantage compared with rivals who are willing and able to put ML to use in all the places where it is appropriate and who can figure out how to effectively integrate its capabilities with humanity’s.

A time of tectonic change in the business world has begun, brought on by technological progress. As was the case with steam power and electricity, it’s not access to the new technologies themselves, or even to the best technologists, that separates winners from losers. Instead, it’s innovators who are open-minded enough to see past the status quo and envision very different approaches, and savvy enough to put them into place. One of machine learning’s greatest legacies may well be the creation of a new generation of business leaders.

In our view, artificial intelligence, especially machine learning, is the most important general-purpose technology of our era. The impact of these innovations on business and the economy will be reflected not only in their direct contributions but also in their ability to enable and inspire complementary innovations. New products and processes are being made possible by better vision systems, speech recognition, intelligent problem solving, and many other capabilities that machine learning delivers.

Some experts have gone even further. Gil Pratt, who now heads the Toyota Research Institute, has compared the current wave of AI technology to the Cambrian explosion 500 million years ago that birthed a tremendous variety of new life forms. Then as now, one of the key new capabilities was vision. When animals first gained this capability, it allowed them to explore the environment far more effectively; that catalyzed an enormous increase in the number of species, both predators and prey, and in the range of ecological niches that were filled. Today as well we expect to see a variety of new products, services, processes, and organizational forms and also numerous extinctions. There will certainly be some weird failures along with unexpected successes.

Although it is hard to predict exactly which companies will dominate in the new environment, a general principle is clear: The most nimble and adaptable companies and executives will thrive. Organizations that can rapidly sense and respond to opportunities will seize the advantage in the AI-enabled landscape. So the successful strategy is to be willing to experiment and learn quickly. If managers aren’t ramping up experiments in the area of machine learning, they aren’t doing their job. Over the next decade, AI won’t replace managers, but managers who use AI will replace those who don’t.

AI in 8 minutes

Knowing a little about everything is often better than having one expert skill. This is particularly true for people entering the debate in emerging markets. Most notably, tech.

 

Most folks think they know a little about AI. But the field is so new and growing so fast that the current experts are breaking new ground daily. There is so much science to uncover that technologists and policymakers from other areas can contribute rapidly in the field of AI.

That’s where this article comes in. My aim was to create a short reference which will bring technically minded people up to speed quickly with AI terms, language and techniques. Hopefully, this text can be understood by most non-practitioners whilst serving as a reference to everybody.

 

Introduction

Artificial intelligence (AI), deep learning, and neural networks are terms used to describe powerful machine learning-based techniques which can solve many real-world problems.

 

While deductive reasoning, inference, and decision-making comparable to the human brain are still a little way off, there have been many recent advances in AI techniques and associated algorithms, particularly with the increasing availability of large data sets from which AI can learn.

 

The field of AI draws on many fields including mathematics, statistics, probability theory, physics, signal processing, machine learning, computer science, psychology, linguistics, and neuroscience. Issues surrounding the social responsibility and ethics of AI draw parallels with many branches of philosophy.

 

The motivation for advancing AI techniques further is that the solutions required to solve problems with many variables are incredibly complicated, difficult to understand and not easy to put together manually.

 

Increasingly, corporations, researchers, and individuals are relying on machine learning to solve problems without requiring comprehensive programming instructions. This black-box approach to problem-solving is critical: human programmers are finding it increasingly complex and time-consuming to write the algorithms required to model and solve data-heavy problems. Even when we do construct a useful routine to process big data sets, it tends to be extremely complex, difficult to maintain, and impossible to test adequately.

 

Modern machine learning and AI algorithms, along with properly considered and prepared training data, are able to do the programming for us.

 

 

Overview

Intelligence: the ability to perceive information, and retain it as knowledge to be applied towards adaptive behaviors within an environment or context.

 

This Wikipedia definition of intelligence can apply to both organic brains and machines. Intelligence does not imply consciousness, a common misconception propagated by science fiction writers.

 

Search for AI examples on the internet and you’ll see references to IBM’s Watson, a machine learning system made famous by winning the TV quiz show Jeopardy! in 2011. It has since been repurposed and used as a template for a diverse range of commercial applications. Apple, Amazon, and Google are working hard to get a similar system into our homes and pockets.

 

Natural language processing and speech recognition were the first commercial applications of machine learning, followed closely by other automated recognition tasks (pattern, text, audio, image, video, facial, …). The range of applications is exploding and includes autonomous vehicles, medical diagnosis, gaming, search engines, spam filtering, crime fighting, marketing, robotics, remote sensing, computer vision, transportation, music recognition, classification…

 

AI has become so embedded in the technology that we use, it is now not seen by many as ‘AI’ but just an extension of computing. Ask somebody on the street if they have AI on their phone and they will probably say no. But AI algorithms are embedded everywhere from predictive text to the autofocus system in the camera. The general view is that AI has yet to arrive. But it is here now and has been for some time.

 

AI is a fairly generalised term. The focus of most research is the slightly narrower field of artificial neural networks and deep learning.

 

How your brain works

The human brain is an exquisite carbon computer estimated to perform a billion billion calculations per second (1,000 petaflops) while consuming around 20 watts of power. The Chinese supercomputer Tianhe-2 (at the time of writing the fastest in the world) manages only 33,860 trillion calculations per second (33.86 petaflops) and consumes 17,600,000 watts (17.6 megawatts). We have some way to go before our silicon creations catch up with evolution’s carbon ones.

 

The precise mechanism that the brain uses to perform its thinking is up for debate and further study (I like the theory that the brain harnesses quantum effects, but that’s another article). However, the inner workings are often modelled around the concept of neurons and their networks. The brain is thought to contain around 100 billion neurons.

 

 

Neurons interact and communicate along pathways allowing messages to be passed around. The signals from individual neurons are weighted and combined before activating other neurons. This process of messages being passed around, combining and activating other neurons is repeated across layers. Across the 100 billion neurons in the human brain, the summation of this weighted combination of signals is complex. And that is a considerable understatement.

 

But it’s not that simple. Each neuron applies a function, or transformation, to its weighted inputs before testing if an activation threshold has been reached. This combination of factors can be linear or non-linear.

 

The initial input signals originate from a variety of sources… our senses, internal monitoring of bodily functions (blood oxygen level, stomach contents…). A single neuron may receive hundreds of thousands of input signals before deciding how to react.

 

Thinking or processing and the resultant instructions given to our muscles are the summations of input signals and feedback loops across many layers and cycles of the neural network. But the brain’s neural networks also change and update, including modifications to the amount of weighting applied between neurons. This is caused by learning and experience.

 

This model of the human brain has been used as a template to help replicate the brain’s capabilities inside a computer simulation… an artificial neural network.

 

Artificial Neural Networks (ANNs)

Artificial Neural Networks are mathematical models inspired by and modelled on biological neural networks. ANNs are able to model and process non-linear relationships between inputs and outputs. Adaptive weights between the artificial neurons are tuned by a learning algorithm that reads observed data with the goal of improving the output.

 

 

Optimization techniques are used to make the ANN solution as close as possible to the optimal solution. If the optimisation is successful, the ANN is able to solve the particular problem with high performance.

 

An ANN is modelled using layers of neurons. The structure of these layers is known as the model’s architecture. Neurons are individual computational units able to receive inputs and apply a mathematical function to determine if messages are passed along.

 

In a simple three-layer model, the first layer is the input layer, followed by one hidden layer and an output layer. Each layer can contain one or more neurons.
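As a minimal sketch (random weights, arbitrary layer sizes, and no training step), a single forward pass through such a three-layer network looks like this:

```python
# Minimal sketch: forward pass through input -> hidden -> output layers.
# Weights are random; a learning algorithm would normally tune them.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))    # a common activation function

rng = np.random.default_rng(0)
x = rng.random(3)                              # 3 input neurons
W1, b1 = rng.random((4, 3)), rng.random(4)     # hidden layer: 4 neurons
W2, b2 = rng.random((1, 4)), rng.random(1)     # output layer: 1 neuron

hidden = sigmoid(W1 @ x + b1)   # each neuron: weighted sum, then activation
output = sigmoid(W2 @ hidden + b2)
print(output)
```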

 

As models become increasingly complex, with more layers and more neurons, their problem-solving capabilities increase. If the model is too large or too complex for the given problem, however, it can fit its training data too closely and fail to generalise to new data. This is known as overfitting.

 

The fundamental model architecture and tuning are the major elements of ANN techniques, along with the learning algorithms used to read in the data. All of these components affect the performance of the model.

 

Models tend to be characterized by an activation function. This is used to convert a neuron’s weighted input to its output activation. There is a selection of transformations that can be used as the activation function.

 

ANNs can be extremely powerful. However, even though the mathematics of a few neurons is simple, the entire network scales up to become complex. Because of this, ANNs are considered ‘black box’ algorithms. Choosing an ANN as a tool to solve a problem should be done with care, as it is generally not possible to unpick the system’s decision-making process later.

 

Deep Learning

Deep learning is a term used to describe neural networks and related algorithms that consume raw data. The data is processed through the layers of the model to calculate a target output.

 

Unsupervised learning is where deep learning techniques excel. A properly configured ANN is able to automatically identify the features in the input data that are important for achieving the desired output. Traditionally, the burden of making sense of the input data falls to the programmer building the system. In a deep learning setup, however, the model itself can identify how to interpret the data to achieve meaningful results. Once an optimised system has been trained, the computational, memory, and power requirements of the model are much reduced.

 

Put simply, feature learning algorithms allow a machine to learn for a specific task using well-suited data… the algorithms learn how to learn.

 

Deep learning has been applied to a wide variety of tasks and is considered one of the innovative AI techniques. There are well designed algorithms suitable for supervised, unsupervised and semi-supervised learning problems.

 

Shadow learning is a term used to describe a simpler form of deep learning, where feature selection requires upfront processing of the data and more in-depth knowledge from the programmer. The resultant models can be more transparent and higher performing, at the expense of increased time at the design stage.

 

Summary

AI is a powerful field of data processing and can yield complex results more quickly than traditional algorithm development by programmers. ANNs and deep learning techniques can solve a diverse set of difficult problems. The downside is that the optimised models created are black boxes that are impossible for their human creators to unpick. This can lead to ethical problems for which data transparency is important.

 

Source: Medium

Will Cloud replace traditional IT infrastructure?

As cloud infrastructure offerings gain more popularity, the debate on the raison d'etre of on-premise IT infrastructure has grown. Obviously, there are two sides of the debate. While one group foresees on-premise IT infrastructure fading into oblivion, the other group believes – challenges notwithstanding – traditional IT infrastructure will remain relevant.

As cloud infrastructure offerings gain more popularity, the debate on the raison d'etre of on-premise IT infrastructure has grown. Obviously, there are two sides of the debate. While one group foresees on-premise IT infrastructure fading into oblivion, the other group believes – challenges notwithstanding – traditional IT infrastructure will remain relevant.

 

Data corroborates the fact that cloud infrastructure has been becoming more popular with increasing adoption. The popularity can be partly attributed to the problems with traditional enterprise infrastructure such as cost and management problems. However, it does not seem realistic that all enterprise infrastructure will move to the cloud. Organizations will likely carry out due diligence and evaluate the proposition on a case-by-case basis. (To learn more about how the cloud is changing business, check out Project Management, Cloud Computing Style.)

The Hype Around the Cloud

There certainly appears to be some hype around cloud, especially on its potential to replace the traditional IT infrastructure. There was recently a debate on this topic sponsored by Deloitte. Obviously, there are two sides of the debate. While one side appeared bullish on the potential replacement of traditional IT infrastructure, the other side took a more balanced view. Let us consider both views:

For Cloud Replacing Traditional IT Infrastructure

This side of the debate focused on eliminating the cost and hassles associated with enterprise architecture (EA). Maintaining the EA involves many different activities, which are viewed as complex, costly and avoidable. There is an opportunity to move everything related to EA to the cloud and reduce hassles and costs significantly. (For more on infrastructure, see IT Infrastructure: How to Keep Up.)

Against Cloud Replacing Traditional IT Infrastructure

Jobs and processes in the cloud cannot be treated as standalone entities. EA will still have a role to play in managing the relationships and dependencies between mission, technologies, processes and business initiatives. Scott Rosenberger, partner at Deloitte Consulting, takes a more balanced view. According to Rosenberger, "No matter what tool you use, the core problem isn't the technology. It's in defining the relationships between all the different components of their vision, from business processes to technology. And that's where EA comes in."

According to David S. Linthicum, noted author,

Cloud computing does not replace enterprise architecture. It does not provide "infinite scalability," it does not "cost pennies a day," you can't "get there in an hour" – it won't iron my shirts either. It's exciting technology that holds the promise of providing more effective, efficient, and elastic computing platforms, but we're taking this hype to silly levels these days, and my core concern is that the cloud may not be able to meet these overblown expectations.

Problems of Traditional IT Infrastructure

Both exasperation with EA limitations and cost considerations have been behind the serious consideration of the cloud infrastructure proposition. Whether we are choosing something even worse is a different debate. EA is a practice which, if implemented well, could yield many benefits. However, it is unable to realize its potential because of certain problems:

  • EA is a separate practice and requires a practice-based management. Yet, organizations put people in charge of EA who are people-focused and not practice-focused.
  • Implementing quality EA requires a deep and broad understanding of EA and its role in the organization. For that, a broader planning and architecture is required, right from the start. However, many different ad hoc architectures are created based on situations, and that can completely jeopardize the broader EA goals.
  • The main problem with many EA architects is their approach to business problems. While the technical acumen of the architects cannot be questioned, they often lack the ability to take a broader view of the business problems and how the EA can solve them. The architects are too deep into the technical nuances, which prevents them from accepting other business perspectives.
  • Many EAs are too complex and rigid. This prevents them from accommodating changes necessitated by changes in business situations. Many head architects tend to forget that the main focus of EA is on business and not on unnecessary technical stuff. According to John Zachman, the founder of modern EA, "Architecture enables you to accommodate complexity and change. If you don't have Enterprise Architecture, your enterprise is not going to be viable in an increasingly complex and changing external environment."

Is Cloud the Solution?

The way forward is to have a balance and not drastically change your IT infrastructure strategy. You also need to seriously consider the issue of confidentiality and security of data. Probably the best approach would be to consider the feasibility of moving EA to the cloud in phases. For example, you could divide your EA into logical areas such as software applications and servers and consider their cases individually. For example, the following categories could be used:

  • Software applications, which can include productivity suites like Office, SQL Server, Exchange email, VMware ESX Server, SharePoint, finance programs (like QuickBooks Server), or an enterprise search program.
  • Service areas, which can include functions such as authentication mechanisms, monitoring, and task schedulers. For example, you can certainly consider replacing complex in-house services such as Active Directory with online services such as Windows Azure Active Directory.
  • Storage can be a tricky proposition because you store a lot of data which can be confidential. So, you need to think hard about whether or not you want to move that data out and allow a third party to take care of it. For example, if your business handles credit card data, it is extremely risky to hand over storage to another entity.

Conclusion

The way forward should be a balance between cloud and in-house architecture. Not all organizations are going to move to the cloud because of their unique considerations. It is rather simplistic to think that all IT infrastructure will just move to the cloud; it is far more complex than that. Studies show that a lot of talk about moving to the cloud is just that – talk. Companies will decide on cloud adoption depending on their data security, cost and benefits, relevance and other considerations. Three scenarios are possible: total, mixed or non-adoption of cloud.

At the same time, it cannot be denied that cloud-based infrastructure is going to be a major force very soon – so much so that major IT infrastructure providers are expecting a slowdown. Research firm 451 Group finds that cloud providers such as Amazon Web Services are going to grow at an exponential rate. But even in the face of growing cloud adoption, EA is not going to go away anytime soon.

Source: Techopedia

Aberdeen to Miami and back again without any money

Alisanne Ennis travelled over 10,000 miles without a single penny to her name in the hopes of raising money for Marie Curie and travelling as far as she could without any money.   Alisanne works for Accenture who encourage their employees to take 3 days every year and dedicate them to helping in the community, upskilling people or supporting a charity.

 

Alisanne set off on May 25th from Huntly in Aberdeenshire, dressed in her Marie Curie yellow T-shirt and carrying Marie McCoo (a Marie Curie bear), “full of optimism and hope that whoever I met along the way would believe in me and donate to the cause.” Alisanne added that she was not disappointed, as she received donations from all sorts of people, including passengers on her BA flight to London, a stranger at JFK Airport, the NYPD and many more. Alisanne received £4,200 in donations for Marie Curie, who provide care and support for people living with any terminal illness, and their families.

 

What was your motivation behind the trip?

I have witnessed the pain and suffering of not only the patients but their family when someone is diagnosed with a terminal illness. I just wanted to see if I could help in some way

 

The idea of travelling as far as you could without money, where did the idea come from?  

Last year I was heading to Chicago for a big conference.   When I arrived at the airport I realised I had left my purse at home. I had to take a leap of faith and get on the plane in the knowledge that some of my colleagues would help me out when I got to my final destination.  I managed to get to Chicago without any money, but my recent trip was a totally different ball game.

 

What were most people’s reaction when you told them what you were doing?

Mad, Brave, Inspirational. Having travelled every week for the last 2 years to deliver a large global project in Switzerland, the last thing I should have been thinking about was getting on another plane…

 

What was the wildest story that came from it?  

Being picked up by the police in NYC at Grand Central Station for playing my Ukulele and singing Irish Ballads.   I managed to make $10 in 10 mins so the police threw in $2 each so I could buy myself a Shake Shack burger and chips – heaven!

What was the toughest challenge you faced? 

Not having any money in Miami – not a great place to be with no money.  People there weren’t particularly friendly or helpful. I managed to find a hotel which gave me a free breakfast and a margarita every day, so I lived on breakfast bars and nuts!

 

What inspired you to tough out the worst parts?

I’m healthy and happy and not facing the pain and suffering that people with a terminal illness have to deal with every day

 

Would you do it again?

Round the world – relay style…   Never say never!

 

Alisanne added that she was humbled by the generosity and kindness shown by the majority of people she told her story to, which led to the money she raised paying for 210 hours of care for people in the community with terminal illnesses and their families.

“I wouldn’t have achieved my goal if I didn’t get the support and encouragement from friends, family, local and other businesses around the UK. Many thanks to Hanson Regan for supporting the cause and believing in me.”

 

If you would like to support the cause you can donate to Marie Curie directly, you can find their donation page here. To get involved you can search their charity events to find something in your local area.

Countering counterfeit drugs with Blockchain

The Effects of Counterfeit Drugs

In addition to posing a health risk to patients harmed by placebos or even harmful ingredients in the fake drugs, counterfeits add up to a major loss for the pharmaceutical industry to the tune of hundreds of billions a year. Aside from concerns about harm and loss, new legal requirements that demand traceability for drugs are kicking in.

Counterfeit drugs have been identified as a persistent global problem since 1985. The World Health Organization (WHO) estimates that around 10 percent of drugs found in low to middle income countries are counterfeit. That translates into the deaths of tens of thousands of people with diseases who took medication without the necessary active ingredient to treat their conditions. (To learn more about how tech is influencing the drug industry, see Big Data's Influence in Medicine and Pharmaceuticals.)

Current Conditions Favor Counterfeiting

According to Harvey Bale, Ph.D., of the Organization for Economic Co-operation and Development (OECD), counterfeits persist because of four conditions:

  1. Fakes can be made relatively cheaply (at least as profitable as narcotics – lower risk).
  2. Many countries, especially in the developing world, lack adequate regulation and enforcement.
  3. Even in the industrialized countries, the risk of prosecution and penalties for counterfeiting are inadequate.
  4. The way in which medicines reach the consumer is also different from other goods: The end user has little knowledge of the product.

Limited Solutions Applied

As the problem is particularly rampant in West Africa, a Ghanaian entrepreneur named Bright Simons offered a verification solution through his company, mPedigree. A customer can be assured that the medication offered for sale is genuine by calling a free number and checking that the code they find on the bottle is valid.

The mPedigree approach to spotting counterfeits works only on the final step of the drug supply chain, and it still puts authentication into one central source rather than offering the transparency of a public ledger, which is only possible with blockchain technology.

The Promise of Blockchain

IBM laid out some of the ways blockchain can improve the healthcare industry in Blockchain: The Chain of Trust and its Potential to Transform Healthcare – Our Point of View. The premise is that blockchain serves as “an Internet of Value” because what is in the blockchain record cannot be altered, and so can be relied upon as trustworthy.

Having that kind of authentication in place would assure consumers they are getting the benefits of the drugs they are prescribed and would benefit pharma companies in setting up a completely traceable supply chain.

Compliance Benefits

At the end of 2013, President Obama signed the Drug Supply Chain Security Act (DSCSA), which calls for a national track-and-trace system by which manufacturers must affix product identifiers to each package of product that is introduced into the supply chain. As companies were granted a period of ten years to reach compliance with the new regulations, they have to gear up for a reliable solution to accurately track their supply chains by 2023. AI is also having a big influence on medicine.

Blockchain Features Secure Trust

Tapan Mehta, market development executive, healthcare and life sciences services practice, at DMI was quoted in Healthcare IT News, saying, “A blockchain-based system could ensure a chain-of-custody log, tracking each step of the supply chain at the individual drug or product level.”

“With blockchain, records are permanent and cannot be altered in any way, ensuring the most secure transfer of data possible,” Mehta explained, thanks to a ledger that is both decentralized and public. That’s what gives blockchain the dual distinction of “transparency and traceability.”

Working off of that would not only make it possible to distinguish the real thing from the counterfeit but, “to trace every drug product all the way back to the origin of the raw material used to make it.”

Another advantage it offers is recovery. He explained, “In the event that a drug shipment is disrupted or goes missing, the data stored on the common ledger provides a rapid way for all parties to trace it,” to the last identified handler.
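As a toy illustration of this chain-of-custody idea (a simplified sketch, not the MediLedger design – a real blockchain adds distributed consensus and digital signatures), the Python snippet below links each custody record to the previous one by hash, so any later alteration breaks the chain and can be detected.

```python
import hashlib
import json
import time

def make_block(record, previous_hash):
    """Wrap a custody record and link it to the previous block by hash."""
    block = {"record": record, "timestamp": time.time(), "previous_hash": previous_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify(chain):
    """Recompute every hash and check the links; tampering is detected."""
    for i, block in enumerate(chain):
        payload = {k: v for k, v in block.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Hypothetical custody trail for one drug package identifier.
chain, prev = [], "0" * 64
for custodian in ["manufacturer", "wholesaler", "pharmacy"]:
    block = make_block({"package_id": "PKG-0001", "custodian": custodian}, prev)
    chain.append(block)
    prev = block["hash"]

print(verify(chain))                   # True: the trail is intact
chain[1]["record"]["custodian"] = "x"  # simulate tampering with a record
print(verify(chain))                   # False: the altered record is exposed
```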

Building the Pharma Blockchain

The blockchain solution is not just a hypothetical idea. In 2017, Chronicled set up a joint venture to build and test a prototype system to function as an industry model under the name of the MediLedger Project. The project included representatives from major companies like Genentech, the Roche Group, Pfizer, AmerisourceBergen, and McKesson Corporation.

The MediLedger Project was built on a Parity Ethereum client, which worked to achieve the DSCSA tracking aims, according to the report on the project’s progress for the year. The prototype demonstrated the possibility of a secure blockchain network capable of processing over 2,000 transactions per second.

The project showed that a blockchain system can validate “the authenticity of product identifiers (verification) as well as the provenance of sellable units back to the originating manufacturer.” In addition to countering counterfeits, that record at every step can be useful in “allowing for expedited suspect investigations and recalls.”

The project report also asserts that there are many “additional business applications to the pharmaceutical industry, allowing for compounding benefit for this industry once such a platform is established.” However, that substantial return on the blockchain investment will only be possible if there is “strong participation from all industry stakeholders (manufacturers, wholesalers, dispensers, service providers, etc.).”

Given that what is at stake is not just billions of dollars for the pharma industry, but the lives and health of millions of people who have been prescribed medication, all the involved parties should come together to solve the problem of counterfeit drugs. If the difficulties in accountability and identification for drug production could be remedied by blockchain, it should be universally implemented.

Source: Techopedia

The pros & cons of Intranet

Most companies incorporate an intranet into their business in some capacity. An intranet is a private computer network that operates within an organization and facilitates internal communication and information sharing with the same technology used by the internet. The major difference is that an intranet is confined to an organization, while the internet is a public network that operates between organizations.

With an effective intranet infrastructure, an organization can reap benefits across the board. In fact, an intranet can significantly improve efficiency and performance. Still, there are risks associated with setting up an intranet. Here, we'll discuss the pros and cons. 

 

Having access to all the resources you need to perform your job tasks is an essential aspect of productivity. If you have to constantly take time out to find required information or are unclear of recent changes to your responsibilities, then that will have a negative impact on your productivity. An intranet acts as a one-stop shop for all workers. It provides them with all the relevant announcements, tools and information to perform their jobs. Easy access is provided to workers by placing all the important information and tools on their individual desktops, which allows them to work smarter and faster.

Allows for Greater Collaboration

An intranet provides effective collaboration tools that are adaptable to a range of personal styles and communication methods. Every company has a diverse range of employees, each with his or her own working style and preferred way of communicating, so collaboration between workers can be difficult.

An effective intranet solution provides separate areas for each department, allowing workers to collaborate and share relevant departmental information. An intranet also facilitates cross-department communication, which breaks down barriers and enables open communication between management and departmental levels. This functionality gives individuals opportunities to share potentially beneficial ideas and perspectives.

Provides a Social Networking Platform

Creating a social work environment is important because it creates stronger relationships between employees, leading to greater job satisfaction and productivity. Most intranet solutions utilize popular social media functionalities that allow staff to display their personalities on their intranet pages. Employees and management can share personal interests, hobbies and other aspects of their personal lives, providing a more personally interactive platform. Relationships forged through an intranet's social networking capabilities can positively impact staff job performance and collaboration. (For different corporate use of social networking, see CRM Meets Social Media.)

Simplifies Decision Making

Access to vital information is crucial to effective decision making. An intranet allows staff to share information and ideas.

Streamlines Data Management

Managing documents is key to any organization. With an intranet, you can easily upload and organize documents that can be accessed at any time. Employees can securely collaborate on projects and data. Document and information availability gives a company a transparent culture, which empowers staff.

Intranet Cons

Potential for Security Risks

Because you are providing open access to sensitive data, it is important to establish an effective security system via a gateway or firewall. Without appropriate security measures, your private data may be accessed by an unauthorized party – putting your company at risk.

Can Be Time Consuming and Costly

Despite the advantages of setting up an intranet, it can be a costly procedure, as dedicated teams must be assigned tasks to set up and configure the intranet for an organization. Additionally, an intranet is only effective when staff members fully understand how it should be used. It is equally important to ensure that staff know all the available intranet functionalities. This means resources must be used to train staff so they can adapt and continue performing their job duties. Without effective training, an intranet implementation can turn into a nightmare because it can impede staff's ability to perform their jobs – ultimately causing losses to the company.

Routine maintenance is a must to keep an intranet organized and functional. Posting regular content also is an important aspect of maintaining an intranet, as it ensures employees check their intranet regularly for new information. This can be a time-consuming process and requires dedication from the management team.

Can Be Counterproductive

An intranet can be an abundant and easily accessible resource for information. However, uploading excessive information in an unorganized manner can be counterproductive and create confusion between employees. Additionally, if information is not organized and cannot be easily navigated, productivity will be negatively impacted.

An effective intranet solution can have a profound impact on organizational productivity, collaboration and data management. Employees have the ability to interact and share information with ease, facilitating effective collaboration on projects – paving the way for increased productivity. With available desktop resources and tools, each employee can easily access everything they need to perform their job. All of these positive benefits stem from dedicating time and resources to setting up an effective intranet.

Source: techopedia

5 most in-demand tech jobs in 2018

As technology becomes more and more integral within our everyday lives, it is only natural that the same can be seen in the workforce. Tech jobs have become the highest in demand jobs and this trend is increasing daily.

According to Cyberstates 2017, an annual analysis of the tech industry by technology association CompTIA, more than 7.3 million workers made up the tech-industry workforce as of 2017. This survey also looked at the unemployment rate in the tech industry and found that it is far lower than the national average in the US.

These results hint at a positive trend in the tech industry, which means that if you’re looking for a well-paying job in a growing industry, a tech job might be your best option. With that in mind, let’s take a look at the 5 most in-demand tech jobs of 2018.

Blockchain Experts

 

Blockchain was a hot buzzword of 2017, along with bitcoin and cryptocurrency. While cryptocurrency has been a known concept since the mid-2000’s, blockchain and bitcoins became the hit topic of conversation when they became a viable form of investment in late 2017.

With that in mind, 2018 is all about blockchain, as it was estimated that the tech behind blockchain will be used by companies across all industries. That’s why blockchain experts and analysts who understand the details of blockchain systems will be in huge demand worldwide.

The exciting thing about Blockchain is the technology behind it can be deployed to sectors ranging from e-voting to the patent industry in the coming years.

Blockchain experts who have a background in computer science, coupled with good analytical and logical skills, will be aware of how blockchain can be incorporated into different scenarios. A blockchain expert like this will be looking at an average base salary of £74,000.

Cloud Engineer

Cloud computing was one of the hottest trending topics of 2017 and the influence of Cloud has not deteriorated at all in 2018. In fact, more and more tech solutions are based on the principles of Cloud computing and the number is only expected to rise.

Pretty much every big application and popular piece of software has its databases in the cloud. For this reason, there is a high demand for cloud specialists and cloud engineers. Their primary responsibility involves designing, planning, managing, maintaining and supporting the various software that runs on cloud-based solutions.

Cloud engineers should be experienced with all the major cloud solutions, such as Azure and AWS, as well as popular coding languages and frameworks like PHP, Node and Python. The average base salary for cloud engineers is roughly £82,000.

 

AI Engineers

Artificial intelligence has grown substantially since its initial debut in the 70’s; thanks to its popularity over the last decade or so, it has been making leaps and strides through rapid innovation and constant development. For those interested in working as an AI engineer, the demand outweighs the supply.

Due to the ever-evolving nature of AI, there’s a near constant need for AI engineers who can break new barriers and take us further than just self-driving cars.

AI engineers need a background in software engineering; the most sought-after programming language is Python, followed by C#, C++ and other frameworks. A successful AI engineer also needs a curious mind and a problem-solving aptitude.

AI engineers are looking at a base salary of around £88,000.

 

Mobile Application Developer

There’s an app for everything, from sharing your acai bowl with friends to ordering a taxi. Smartphones and mobile apps are everywhere, and with each passing day innovative people come up with more and more ideas. This makes Mobile Application Developer a high-demand title.

Mobile Application Developers can become highly skilled within their field, either on select platforms like iOS and Android or as experts in hybrid platforms like PhoneGap and React Native. Regardless of what you choose, good pay and constant demand are what you can expect.

A role like this includes writing code and developing applications from scratch, therefore a background in programming will be a priority.

A senior App developer will likely be earning upwards of £82,000 a year.

 

Cybersecurity Expert

The internet touches every aspect of our day-to-day lives and the whole world has become an ever-growing cyberspace. Everyone who is active on the internet will have their personal information floating about somewhere, which is why cybersecurity should be a top priority for all companies – and why Cybersecurity Experts are in high demand.

Where there’s data, there is also a chance of it being misused, erased or tampered with. Making sure there is no unauthorised access is the cybersecurity expert’s job. They deal with preventing cyber attacks through their expertise on the subject and their in-depth knowledge of databases, networks, hardware, firewalls and encryption.

Before you can get hired as a cybersecurity expert, there are several certifications specific to the cybersecurity specialisation that need to be acquired. A high level of attention to detail, as well as a fine eye for detecting anomalies in a system, are must-haves when it comes to being a cybersecurity expert.

You can expect to be paid an average base salary of £79,000 a year.

These are just a few of the most sought-after tech jobs for 2018, being an ever-evolving industry means there will be something for everyone.

Source: Irish Tech News

If you’re looking to take the next steps, contact Hanson Regan today Info@hansonregan.com

Master Pool with AR Pool Live Aid

Poolaid is going to help you become masters at all pool games. Especially fantastic news for those of us that are challenged at controlling a cue stick, this will be the AR experience for you. 

A team of students from the University of the Algarve in Portugal are the designers behind it. Poolaid creates real-time light predictions of billiard shots. A projector hangs above the table and the system analyses the positions of the billiard balls, then detects the lines that correspond to them in relation to the cue, which it projects onto the table.

 

The students had this to say:

"We developed an algorithm that tracks and analyzes the ball's position. It detects lines that match up with the cue. The computer's connected to the projector too, so it updates right away."

Poolaid isn't the only projection mapping tool for pool that exists. There are many AR experiences that are interactive with a pool table. Obscura Digital released a stunning billiards experience with an interactive media production called Cuelight.

More recently, Openpool launched a Kickstarter for their projection mapping kit for billiards, which allows you to play billiards with beautiful interactive visual effects.

AR is entering exciting realms, making the world around us digitally interactive, and I am looking forward to what else it has in store for us.

Following up CVs

The moment you begin sending out CVs, start keeping a log and set up a tracking method.

Recruitment experts suggest that every application should be followed up within 7-10 days if you have not had a personalised response. If you wish to follow up before then, e-mail them a quick note asking if they received and were able to read your CV, (or if they require a different format for their database), or better still, pick up the phone.

CV follow up:

• After you've sent your CV to contacts and acquaintances asking for their support during your job search.
• After you've sent cover letters and CVs to employers, regardless of whether they have a specific job opening.
• After you've had a networking meeting with someone.

How to follow up:

By (short!) email:

• Put your full name and the title of the position you've applied for in the subject line.
• Write a professional note that reiterates your qualifications and interest in the job.
• Attach your resume again.
• Include your full name in the file name of your resume.
• Changes to the Companies Act 2006 mean you must include your Company Name, Registered Address, Company Registration Number and Place of Registration in all your corporate emails.

By phone:

• Keep it short and sweet. Introduce yourself and remind the recruiter that you submitted a resume recently. Make sure you state exactly what job you're interested in. You can also ask if they received your resume and if they're still considering candidates for the position. In a difficult market, with more contractors chasing jobs, a phone call is likely to help you stand out more.
• Always try a few times to speak to someone if you get a recorded message at first.
• Try to strike a balance when following up – call too many times and you may achieve the opposite of your desired reaction!

If you’re looking to take the next steps, contact Hanson Regan today Info@hansonregan.com

Source: ContractorUK

Why don't businesses care about cyber security?

Businesses are still not getting the cyber security basics right, and they are not learning from past incidents. According to Troy Hunt, Pluralsight author and security expert, few businesses are learning from others’ past mistakes, as cyber security incident after incident proves.

“A good example of this is the BrowseAloud compromise that hit thousands of government websites and organisations in the UK and around the world,” he told Infosecurity Europe 2018 in London.

“Despite the fact this had a fairly significant impact, many organisations have not learned the lesson and most websites are not applying a free and easy fix, including those belonging to some UK and US government departments and some major retailers.”

 

The problem was caused by the corruption of a file in the Browsealoud website accessibility service that was automatically executed in the browsers of visitors to the site.

In addition to running the BrowseAloud service in the browser, the corrupted file also launched cryptocurrency mining software to enable the attackers to tap into the computing resources of visitors to affected sites to mine Monero cryptocurrency for the benefit of the attackers.

“This can be stopped with the use of a content security policy (CSP), which is just a few lines of code organisations can add free of charge to their websites to ensure that only approved scripts run automatically when they use third party services like BrowseAloud,” said Hunt.

“Despite the incident highlighting this issue, barely anyone is using CSPs. In fact, only 2.5% of the world’s top one million websites currently use CSPs to defend against rogue content,” he said.
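A content security policy is simply an HTTP response header. As a minimal, hypothetical sketch of the kind of fix Hunt describes, the small Python WSGI app below sends a policy that only allows scripts from the site's own origin and one trusted third-party host; the host name is an assumption and would be replaced with whichever script providers a site actually uses.

```python
from wsgiref.simple_server import make_server

# Hypothetical policy: allow scripts only from our own origin and one trusted CDN.
CSP = "script-src 'self' https://scripts.example-cdn.com"

def app(environ, start_response):
    """Serve a page with a Content-Security-Policy header attached."""
    body = b"<html><body>Hello</body></html>"
    headers = [
        ("Content-Type", "text/html"),
        ("Content-Security-Policy", CSP),  # browsers refuse scripts from other origins
    ]
    start_response("200 OK", headers)
    return [body]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()
```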

Hunt said a cryptocurrency miner was perhaps one of the “most benign” forms of content attackers could have chosen to launch through the compromised BrowseAloud file. “In reality, we got off lightly this time around, but we have not seen any significant action by website owners in response.”

This incident underlines the fact that many websites use services and content from third parties, which represents a security risk because attackers could compromise these in the way that the BrowseAloud file was compromised and execute malicious code through millions of websites.

“An analysis of the US Courts website reveals that its home page represents 2.3MB of data, which is the same size as the entire Doom game, and that almost a third of that is scripts, which is rather a lot of active content that is automatically loaded into visitors’ browsers, especially when you consider that you can do just about anything with JavaScript,” said Hunt.

Compounding the problem, he said, is that most organisations are poor at detecting malicious activity, which was well illustrated by the Sony Pictures cyber attack in 2014. “Various systems were compromised at the same time and different types of data stolen, but the first the company knew of it was when employees attempted to login and were greeted with a message saying: ‘You’ve been hacked’.”

According to Hunt, who runs the HaveIBeenPwned website that aggregates breached records and makes them searchable for those affected, most organisations have no idea that they have been hacked, and even if they do, they have no idea what data may have been stolen.

“Many of them only find out when they get an email from me telling them that their data is available on the internet,” he said, adding that this underlines that fact that detection is often difficult. “But choosing a breach detection tool can be equally difficult. There are so many suppliers selling breach detection solutions, but it is difficult to work out what actually works.”

Organisations in the dark

Another indicator that organisations are not covering the basics, said Hunt, is that many organisations still have no idea of what company files are exposed to the internet.  

According to security firm Varonis, 21% of all company folders are open to anyone on the internet, and of those open folders, 58% contain more than 100,000 files.

In summary, Hunt said organisations need to assess the state of their cyber security and ensure that at the very least they are addressing the basics because simple, well-known attacks are still working.

Organisations also need to understand that it is easier than ever for cyber attackers to make money out of their data thanks to the advent of cryptocurrencies.

Next, organisations need to understand that their websites and those that their employees visit to do their jobs are made up of code from multiple sources, and any one of these could represent a security risk.

And finally, in light of the fact that choosing effective and affordable security solutions can be difficult, organisations should not overlook those that are free and easy to implement.

Source: Computerweekly

BBC will be showing the World Cup in Virtual Reality

The BBC will be trialling VR and ultra-high definition technology during its coverage of the 2018 FIFA World Cup in Russia. This will form part of the broadcaster’s cross-platform coverage, which will include TV, radio and digital channels.

Matthew Postgate, BBC chief technology and product officer, said: “The BBC has brought major live broadcasting breakthroughs to UK audiences throughout the history of the World Cup. From the very first tournament on TV in 1954 and England’s finest hour in 1966, to the first colour World Cup in 1970 and full HD in 2006. Now, with these trials, we are giving audiences yet another taste of the future.”

The BBC Sport VR – FIFA World Cup Russia 2018 app, which will be available to download for free on Apple, Android, Gear VR, Oculus Go and PlayStation VR, will enable users to watch the 33 matches the BBC is covering in real time.

The application allows various views of each game, including a virtual luxury private box or a seat behind one of the goals.

Viewers can also view live statistics about the game while it is in progress, or watch daily highlights and other on-demand content when there is no game taking place.

The BBC has been working on a number of research and development projects in recent years to prepare for a digital future and cater to consumers who increasingly expect to have customised content delivered to them any time on any device.

This includes the possibility of virtual reality TV in the future, as well as content based on a person’s interests and location.

For best performance when viewing the World Cup matches through VR, a connection of at least 10Mbit/s over WiFi is recommended, and when downloading the VR application, iOS 10 and above and Android 5 and above are needed.

BBC One’s 29 World Cup matches will be streamed in ultra-HD and high dynamic range (HDR) on BBC iPlayer for a limited number of first-come, first-served people – up to tens of thousands.

Recommended for those with a compatible ultra-HD TV and an internet connection of at least 40Mbit/s for the full 3,840-pixel ultra-HD or 20Mbit/s for 2,560-pixel ultra-HD, the stream will be available from the BBC iPlayer home screen once live coverage begins.

The BBC has developed the technology to make these streams available alongside Japanese broadcaster NHK, using hybrid log-gamma, a version of HDR designed to improve picture quality.

The broadcaster plans to gather data about its HD trial to help develop its user experience through this medium, and make plans for the future, when people are likely to expect events to be streamed across the internet in high quality to large audiences.

As audiences become more tech-savvy, the BBC has been investigating ways that people might want to consume content in the future. For example, in 2016 the broadcaster spoke about work it was doing on holographic TV development, which could give people a more immersive viewing experience.

The BBC has also run a pilot alongside Microsoft to test how users could use voice control to navigate the BBC iPlayer app, and it aims to redesign its digital iPlayer service by 2020 to better reflect the current content rental and streaming trend.

Source: Computerweekly

Sunrise up Croagh Patrick

We're supporting “Sunrise Up Croagh Patrick”, an annual get-together of friends who climb Croagh Patrick or walk nearby & cycle the Greenway, have a super time & raise funds for worthwhile charities fighting Neurological Diseases.

 

 

When we say climbing, it's not vertical – no ropes, no scrambling – and with a bit of care it's well within the reach of most people. However, this year there will also be a 4km or up to 11km low-level walk near Croagh Patrick at the same time, for those who can't or choose not to climb. On Sun July 1st some will be choosing to cycle from Achill or Mulranny to Westport along the Greenway. This is an optional extra and a trial event for this year.

It was initially organised by John Kelly (St Jarlaths 1979). We are proud to be sponsoring Sunrise up Croagh Patrick once again. The event has grown and attracted wider support from many great people who have been affected by Huntington’s Disease, Parkinson’s, Motor Neuron Disease & Dementia.

This year’s event is on 30 June 2018 (Climb, walk & dinner) + July 1st (Cycle) and we will be staying in the Westport Plaza Hotel for two nights from 29 June. Details can be found on the website.

So why not join us for this great weekend of activities in Westport, for the 4th annual #SunriseupCroaghPatrick event. Form a group of your colleagues and friends, or come on your own and mingle with the whole gang. Have great fun and support very deserving charities.

Register for the event here or if you can’t make it, you can sponsor others who are making the trip.  

What gets a CV binned by an agent?

What’s in your CV can make the difference between being put forward for a role or not. But what key factors can ensure that yours stands out from the hundreds of others?

The most often-quoted rule-of-thumb is to keep your CV under two pages, or three at most. But arguably more important than length is ensuring that it is tailored for the role advertised. Particularly important is the front page summary, as if this doesn’t obviously match you to the role, you’re likely to be binned at the first hurdle.

“I once got knocked back for an enterprise architect role because the first page of my CV didn’t include the word ‘C++’,” says one contractor. “I spent the first ten years of my career doing C++, not that it’s relevant to the role anyway – and because it wasn’t on the front page, it didn’t get seen, and I didn’t get put forward.”

But not everyone agrees that a concise CV is required for every role. “Sometimes I like to see a bit more than that,” says recruitment agent Norman von Krause, “as I like to be provided with as much detail as possible, without going mad – particularly when people have had a lot of jobs.”

“Sometimes very senior roles can require more than two or three pages,” agrees Sarah, an IT recruiter for a large agency. “But in those cases the first page should make it very clear what qualifies you for the job.”

Other things guaranteed to get your CV filed in the bin include: “too much colour used, i.e. coloured fonts; fonts that are too wacky – it’s all a bit try-hard; and daft email addresses – stuff like sexybitch@hotmail.com”.

“Pointless information is a no-no on the first page,” says Sarah, and for experienced professionals this can include education details. “I’m not interested in the O-level in Biology you passed twenty years ago.”

So what are the things that will make sure your CV is seen by an agent – and seen by a potential client?

First, you need to make sure it can be found by an agent. Making sure that your CV hits all the potential search keywords – a sort of search engine optimisation in miniature – is perhaps the single most important thing you can do to make sure your CV is at least going to be found in the megalith databases of Monster and Jobserve, not to mention the various internal databases that recruitment consultancies use. So a WCF developer should make sure that their CV contains every possible combination of technologies, roles and acronyms that an agent searching for a WCF developer might look for – for example, .NET, Windows, Communication, Communications, Foundation, Developer, Programmer – and of course WCF.

As one agent wrote in the CUK forum recently, “A computerised search can scan CVs in more depth than I can, in far less time. If I can do something in 5 minutes, or two hours, with the same net result, which am I going to do?”

“Computerised searches are a fact of life, especially in IT,” adds Sarah.
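As a toy illustration of that kind of automated screening (the keywords and CV text are made up, and a real agency tool would be far more sophisticated), a few lines of Python are enough to scan a CV for the terms a recruiter might query.

```python
import re

# Hypothetical search terms an agent might use for a WCF developer role.
KEYWORDS = ["WCF", ".NET", "Windows Communication Foundation", "Developer", "Programmer"]

def keyword_hits(cv_text, keywords=KEYWORDS):
    """Return the keywords found in the CV text, ignoring case."""
    return [term for term in keywords
            if re.search(re.escape(term), cv_text, flags=re.IGNORECASE)]

cv_text = "Senior .NET developer with 6 years building WCF services on Windows."
print(keyword_hits(cv_text))  # ['WCF', '.NET', 'Developer']
```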

Most important of all, though, says von Krause, are “CVs that are clear and easy to read. Write about your duties and experience in detail but don't use too many words. We don't want to see an essay.”

If you’re looking to take the next steps, contact Hanson Regan today Info@hansonregan.com

 

Source: ContractorUK

AI Today: Who is using it now and how?

AI (Artificial Intelligence) is a versatile tool, but how is it currently being used in business?

Artificial intelligence is all the rage in the enterprise these days. Stories abound about all the gee-whiz capabilities it will bring to our personal and professional lives.

But like any technology, there is usually a fair amount of hype before the reality sets in. So at this point, it is probably worth asking: Who is using AI right now, and how?

AI in Action

In a broad sense, says Information Age’s Nick Ismail, AI is already bringing five key capabilities to the enterprise:

  • Voice/Image Recognition: Applications range from accurately transcribing meetings and sales calls to researching the impact of branding, logos and other visuals on the web.
     
  • Data Analysis: Unstructured data in particular is very difficult to quantify. Using readily available tools, organizations are able to delve into the minutia of their operations, supply chains, customer relations and a wealth of other activities to gather intelligence that is both accurate and actionable.
     
  • Language Translation: Convert one spoken language into another in real time, an increasingly important tool for multi-national corporations.
     
  • Chatbots: Automate the customer experience with a friendly, responsive assistant that can intuitively direct inquiries to the proper knowledge base.
     
  • Predictive Analysis: Accurately forecast key data trends, such as cash flows, customer demand and pricing.

To see some of these capabilities in action, check out the new website for Peach Aviation which features an automated response system that provides multi-language support for customer inquiries. The system runs on the Desse AI agent provided by SCSK ServiceWare Corp., and can respond in all languages serviced by the airline: Japanese, English, traditional and simplified Chinese, Cantonese, Korean and Thai. As well, it uses data analysis to continuously monitor questions and answers to provide steadily improved quality. The company reports that out of 100,000 inquiries received in late December and early January, the system was able to provide automatic responses to 87 percent.

Yet another example of AI in action is a joint project by NBCUniversal and CognitiveScale to discern the key elements in a successful Super Bowl ad. The companies used CognitiveScale’s Cortex platform to analyze three years’ worth of game-day commercials and various client-engagement data to derive actionable insights linked to key video concepts, attributes and themes. For instance, the research showed that comedic effects work best with sales messages, while uplifting tones are more effective for branding.

While AI will not write and produce the perfect ad itself, NBCUniversal’s SVP of Corporate Analytics and Strategy Cameron Davies said it provides greater insight into what works and what doesn’t.

“The CognitiveScale platform gives us the ability to consider new ad strategies for companies who want to ensure their ads will be successful when they invest in production and media buying,” he said.

CognitiveScale is also working with organizations in the financial, health care and retail industries by allowing video data to undergo the same analytics processes as voice, image and text.

Baddies Beware

AI is also turning into an effective crime-fighting tool, says Forbes’ Rebecca Sadwick. It turns out that one of the biggest hindrances to modern law enforcement is the bureaucratic inertia that exists in both public and private processes. AI helps overcome these hurdles, bringing much-needed clarity to highly organized criminal enterprises ranging from money laundering to human trafficking to terrorism.

One of the key ways AI helps solve crimes is by lowering the cost on private entities to oversee their transactions. Like any regulatory requirement, compliance is primarily a cost factor for organizations that are focused on profitability. Using third-party AI platforms specifically geared toward identifying suspicious data patterns, companies have not only lowered their costs but increased their chances of detecting nefarious activities. Prior to AI, it is estimated that nearly half of all financial crimes went unnoticed.

As well, banks and financial institutions that have deployed AI in this way actually help law-abiding citizens take part in fighting crime. Every time a legal transaction is processed, a learning algorithm is exposed to the normal patterns of money movement and is thus better equipped to identify transactions that break these patterns.
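As a rough sketch of that idea (synthetic data and illustrative parameters, far simpler than anything a bank would actually deploy), an unsupervised model such as scikit-learn's IsolationForest can be fitted to normal transaction patterns and then asked to flag transactions that break them.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" transactions: amount and hour of day.
normal = np.column_stack([
    rng.lognormal(mean=3.5, sigma=0.5, size=1000),  # typical amounts
    rng.integers(8, 20, size=1000),                 # business hours
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New transactions: two ordinary ones and one unusually large payment at 3 a.m.
new = np.array([[40.0, 12], [55.0, 17], [9000.0, 3]])
print(model.predict(new))  # 1 = looks normal, -1 = flagged as anomalous
```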

Technology is a two-way street, of course, so the same technology that is currently helping to fight crime can also be used to conduct it. With enough computing power, an intelligent system might be able to leverage the rising trend of micro-transactions by breaking up large transactions into numerous, perhaps millions, of smaller ones that are harder to detect and track. As well, quantum technology has yet to make its presence known in the criminal underworld (as far as we know, at least), which would open up an entirely new front in the war against cybercrime. (To learn more about quantum computing, see The Challenge of Quantum Computing.)

Clearly, we are in the earliest stages of AI development, so there will no doubt be numerous other ways in which it will affect mainstream enterprise processes as the market matures. Unlike earlier technologies, however, AI is expected to improve with age as it incorporates both human- and machine-generated data to forge a greater understanding of the environment it occupies and how best to navigate it.

And this is likely to be the most profound change of all: the end of lengthy development processes in which new features come out once a year (if that) and can only be implemented by taking infrastructure and data offline. In the future, digital systems will get better with age, all by themselves.

Source: Techopedia

3 Amazing examples of AI in action

The capabilities of AI are increasing by leaps and bounds, and machines are beginning to comprehend things at near human level. Some see this as revolutionary progress, while others look on it with caution.

What is the mind? Is it simply a collective sum of networked neural impulses? Is it less or more than that? Where does it begin and where does it end? What is its purpose? Is it the soul? These are questions that have haunted human consciousness for much of its existence. But in this increasingly digital age, we gain exciting new insight into the nature of consciousness by artificially simulating it.

Artificial intelligence is somewhat loosely defined, but can generally be understood as a subset of another field called biomimetics. This science (interchangeably referred to as "biomimicry") imitates natural processes within technological systems, using nature as a model for artificial innovation. In nature, evolution rewards beneficial traits by proliferating them throughout the natural ecosystem, and technology shares similar tendencies, in that the technology that yields the most useful results is that which thrives.

As machines develop the ability to learn, compute and act with a level of creativity and individual agency that is virtually human, we as people are confronted with increasingly complex but imminent questions surrounding the nature of AI and its role in our future. But before we delve too deeply into the semantics of artificial intelligence, let’s first examine three ways in which it is already beginning to manifest in our world.

Recognition

Human perception is like a set of input devices on a computer. Visual data hits the human retina and then flows through the optic nerve to the brain. Sound waves hit the outer and then middle ear before the inner ear begins the neuronal encoding process. Touch, smell and taste similarly transform external stimuli to internal neurological activity. And our memory serves as a database within which this sensory information can be cross-referenced, identified and put to use.

The computer reflects human anatomy in its configuration of input, transduction and storage. Cloud technology has evolved into a sort of collective consciousness that stores, vets and distributes shared knowledge and ideas. Image and sound recognition software use camera and microphone hardware to input and cross-reference data with the cloud, in turn outputting an explanation to the user of that which was seen or heard. Recognition apps like CamFind and Shazam basically serve as sensory search engines, while the fields of robotics and automated transportation build machines that use recognition technology to navigate and act within the world with unprecedented independence. (For more on AI's attempts to become more human, see Will Computers Be Able to Imitate the Human Brain?)

CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) has served as one of the most effective validation tools in internet security for many years now. It is well known for blocking automated password breaches with a challenge-response interface that, for a long time, only humans could pass. However, a team known as Vicarious has managed to develop a breach for the software using a program that simulates the human thought process. The node-based software assesses the CAPTCHA image in stages and, like a human mind, is able to break elements of the image into components that are compared with language characters in a database. CAPTCHA has long been emblematic of the difference between machine intelligence and human intelligence. But with Vicarious’s new innovation, the line between the two is being blurred.

Prediction

There is a great deal of economic incentive for predictive technology. The discipline is used extensively in marketing, gauging customer behavior and data in order to anticipate commercial activity and maximize profits. Analytics help businesses determine where to expend their efforts to achieve the most desirable results, and make the predictions needed to compete in the modern digital economy. The technology is also implemented in some government and policing efforts, which some view as highly useful while others see as potentially harmful, as the tactics could employ biased statistics and perpetuate discriminatory practices.

But with predictive analytics improving disciplines like medicine and environmental science, there also exists a great deal of potential for social good in both the private and public sectors. Predictive health IT systems work to improve accuracy and efficiency in health science, and elevate preventative medicine to a level that virtually automates it. Intelligent systems employ prediction in order to identify future benefits and avoid potential problems, and can provide assistance to people before they even realize that they need it. (To learn more about predictive analytics, see Predictive Analytics in the Real World: What Does It Look Like?)

Some technology business leaders prefer the term "augmented intelligence" over "artificial intelligence" and argue that threats posed by the technology are very minor compared with their potential benefits. However, some are not so optimistic.

Activism

There are many renowned scientists and technology innovators who believe that artificial intelligence can potentially have catastrophic consequences. Among them is Elon Musk, who now co-chairs a nonprofit research organization called OpenAI. Musk, in fact, has stated that he believes artificial intelligence could be humanity’s greatest existential threat, and through OpenAI, he and his team are attempting to cultivate ideas and initiatives in AI that will be geared toward the greater public good. The organization intends to develop AI systems that are open-source friendly, and is currently focused on deep learning research.

Musk justifies the initiative by arguing that it is better to participate in artificial intelligence early enough in its development that it can be steered toward human progress rather than private gain, but without depending on regulation to dictate its terms and purpose. OpenAI maintains a vision for decentralized and crowdsourced technology that maximizes AI’s potential benefits for humankind.

Conclusion

Whether or not the technology will benefit humanity is inherently difficult to predict. But one thing is almost certain: Whoever controls artificially intelligent technology in its early stages will wield considerable power and influence over all of human civilization. Money, labor, government and media are just a few facets of society that will be changed dramatically by these innovations. And it is up to us to set the technology on the right path while we still have the power to do so.

 

Source: Techopedia

The next big update from SAP

The next big update from SAP

SAP’s next big update takes aim at Salesforce. The company is planning to take on Salesforce on multiple fronts, aiming to tie its back-end financial software to the front office and redefine customer relationship management. SAP has already assembled the CRM elements, and the challenging part of putting them together is next.

SAP CEO Bill McDermott unveiled a suite called SAP C/4HANA, which will incorporate the acquisitions of Hybris, Gigya and CallidusCloud and will cover consumer data, marketing, commerce, sales and customer service.

SAP’s new acquisition, Coresystems, will add to its field service capabilities. The Swiss company uses AI and crowdsourcing to manage field service technicians. Coresystems will become part of SAP’s Service Cloud.

McDermott said the company "was the last to accept the status quo of CRM and is now first to change it. That's a guarantee."

With that, McDermott noted the need to revamp “legacy CRM systems” that revolve around sales. SAP, along with Oracle’s Siebel systems, provided the legacy CRM apps upended by Salesforce. Now SAP is trying to paint Salesforce into a legacy corner.

Discussing CRM, McDermott said it revolves around providing one view of the customer. By aligning and integrating its core strengths with CRM, the company aims to differentiate itself. SAP’s priorities here are machine learning via SAP Leonardo and integration with SAP S/4HANA, its ERP suite, which has about 8,300 customers. Speaking during his keynote, McDermott said:

“There is a direct correlation behind the size of the problems we solve and our existence and relevance. Our greatest validation is our customers’ success.”

McDermott argued that CRM has to change. "We have moved from 360 degree view of sales automation where some companies focus to 360 degree view of the actual customer," said McDermott. The idea is that the supply chain and transactional data will be connected to the customer record and commerce in any channel on top of SAP Cloud Platform.

Salesforce’s latest quarter highlights the strong demand for a relationship operating system. However, Salesforce has already made big strides in becoming that, and it continues to acquire and grow.

SAP's C/4HANA portfolio includes SAP's marketing, commerce, service, customer data and sales clouds. SAP Sales Cloud unites Hybris Cloud for Customer, Hybris Revenue Cloud and CallidusCloud. SAP has consolidated these front-office functions and cloud-based CRM efforts under a customer experience management suite.

According to Ray Wang, principal of Constellation Research, SAP has a CRM installed base due to bundling the application with its ERP tools. However, customers are also buying Salesforce even if SAP is running the financials and back office. The two CRM leaders in Wang's view are Salesforce and Microsoft in terms of users. Oracle also has a large installed base.

Add it up and SAP's CRM plans may be more about keeping itself in the loop with customers and gaining enough mind share with enterprises. SAP's Service Cloud is solid, said Wang. "SAP is saying that it is not ready to cede the market to Salesforce," he said. "SAP has a base there and there are Hybris commerce customers that may look to SAP for marketing."

SAP is also looking to shift the CRM conversation from managing sales to gaining productivity and return on investment.

 

“Putting it all together will be a lot of work,” said Wang, in regard to how SAP will be able to blend its various moving parts and technologies into a coherent suite. “SAP will have to get to a level of a common UI,” he added, noting that SAP’s Fiori design language has turned out better than expected and can bridge gaps between the applications.

 

Selling SAP C/4HANA

To date, SAP's cloud strategy has been fairly straightforward: Acquire companies with installed bases and then cross-sell to bring down customer acquisition costs. Whether it's more recent purchases such as Callidus, Concur or Gigya or older ones like SuccessFactors or Ariba, SAP has mastered the cross-sell and wallet share momentum.

But if SAP is going to become a CRM player with a new customer-first e-commerce spin, the company will have to branch out into playing small ball. SAP has historically been about large enterprise deals, but the software market is moving toward more direct, land-and-expand sales.

Enter Bertram Schulte, chief digital officer of SAP. At SapphireNow, SAP outlined plans to make SAP.com transactional. Schulte's team of 100 people has a simple mission: Simplify the buying process for customers.

Today, customers and partners buy SAP applications, and the back-and-forth with contracts, procurement and fulfillment can take weeks, explained Schulte. Adding 10 more users and an extension module goes through a similar process.

SAP.com is now aiming to handle those transactions. "We are establishing the digital channel and it won't be a parallel universe to field sales, but an augmentation," he said. "There will be channel parity."

As a result, the new SAP.com should facilitate more subscriptions over time. "This is also a cultural effort. In big deal scenarios, we don't rely on scalable no touch efforts. We need to think about trial to buy and retention. It's a land and expand way to think about it."

Schulte said that SAP is farther along than it initially thought it would be when the digital initiative launched. While the digital sales efforts may not be a direct fit with C/4HANA, the plan is worth noting. If SAP is really going to challenge Salesforce in CRM it is going to have to play small ball and get some folks to try out its software on the side.

The C/4HANA, S/4HANA promise

Should SAP's C/4HANA really get traction, it's likely to be with customers that have already standardized on S/4HANA ERP as a platform.

The move to launch C/4HANA also illustrates how enterprise software vendors are going for platform plays across multiple functions. If successful, SAP's efforts will rhyme with what Microsoft is doing with Microsoft 365. Think enterprise software buffet.

But first, those S/4HANA standardization efforts need to pick up. To that end, Accenture announced at SapphireNow that it has rolled S/4HANA out broadly to 15,000 users. According to Dan Kirner, Accenture's deputy CIO, S/4HANA was coupled with Microsoft Azure to support diverse business units, add real-time analytics and financial reporting, integrate mergers and acquisitions, and allow SAP's new technologies to be added onto the S/4HANA base.

That last point is critical if C/4HANA is going to be a big success. Accenture, a key SAP systems integration and consulting partner, runs on SAP across the company. "The whole ERP market is moving that way (to a platform)," said Kirner. "We look at SAP as an overall suite whether it's finance, SuccessFactors, Ariba or Concur."

Accenture took a year to fully roll out S/4HANA and it's among the first companies of its size to complete an implementation on S/4HANA and Azure.

Those early S/4HANA customers are going to be an initial target customer base for C/4HANA. It remains to be seen whether SAP's CRM efforts expand beyond its installed base. The company, of course, is optimistic.

"We believe C/4HANA is very differentiated and in line with what modern enterprises are thinking of today when it comes to customer experiences," said Alex Atzberger, president of SAP Hybris. "We don't believe our customers have seen SAP as a CRM choice, but we're now going all-in on CRM again."

 

Source: ZDNet

How AI is helping fight crime

How AI is helping fight crime

Artificial intelligence (AI) is being used both to monitor and prevent crimes in many countries. In fact, AI’s involvement in crime management dates back to the early 2000s. AI is used in such areas as bomb detection and deactivation, surveillance, prediction, social media scanning and interviewing suspects. However, for all the hype and hoopla around AI, there is scope for growth of its role in crime management.

Currently, a few issues are proving problematic. AI is not uniformly engaged across countries in crime management. There is fierce debate on the ethical boundaries of AI, compelling law enforcement authorities to tread carefully. Defining the scope and boundaries of AI, which include personal data collection, is a complex task. Problems notwithstanding, AI represents the promise of a new paradigm in crime management, and that is a strong case for pursuing it. (For more on crime-fighting tech, see 4 Major Criminals Caught by Computer Technology.)

What Is the Crime Prevention Model?

The crime prevention model is about analyzing large volumes of various types of data from many different sources and deriving insights. Based on the insights, predictions can be made on various criminal activities. For example, social media provides a veritable data goldmine for analysis – though, due to privacy concerns, this is a contentious issue. It is a known fact that radicalization activities by various groups are done through social media. AI can reveal crucial insights by analyzing such data and can provide leads to law enforcement agencies.

There are also other data sources such as e-commerce websites. Amazon and eBay can provide valuable data on the browsing and purchasing habits of suspects. This model is not new, though. Back in 2002, John Poindexter, a retired U.S. Navy admiral, developed a program called Total Information Awareness, which prescribed collecting data from online and offline sources. But following vehement opposition over privacy intrusion issues, funding support for the program was stopped within a year.

Real-Life Applications

AI is starting to be used for crime prevention in innovative ways around the globe.

Bomb Detection and Deactivation

The results of deploying robots in detecting bombs have been encouraging, which has led to the military procuring robots worth $55.2 million. Over time, robots have become more sophisticated and can distinguish between a real bomb and a hoax by examining the device. According to experts, robots should soon be able to deactivate bombs.

Surveillance, Prevention and Control

In India, AI-powered drones are used to control crowds by deploying pepper spray and paintballs or by making announcements. Drones are fitted with cameras and microphones. It is believed that drones will soon be able to identify people with criminal records using facial recognition software and predict crimes with machine learning software.

Social Media Surveillance

Social media provides the platform for executing different crimes such as drug promotion and selling, illegal prostitution and youth radicalization for terrorist activities. For example, criminals use hashtags to promote different causes to intended audiences. Law enforcement agencies in the U.S. have succeeded to an extent in tracking such crimes with the help of AI.

Instagram, for example, is used to promote drug trafficking. In 2016, New York law enforcement used AI to track down drug peddlers. AI searched for millions of direct and indirect hashtags meant to promote drugs and passed the information on to police. Similarly, to tackle radicalization of youth, law enforcement agencies are using AI to monitor conversations on social platforms.
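
As a rough illustration of the hashtag-scanning approach described above, the short Python sketch below flags posts whose hashtags match a watchlist of direct terms, or several co-occurring indirect terms. The watchlist contents, the Post structure and the threshold are assumptions for illustration only, not the actual system used by any law enforcement agency.

    from dataclasses import dataclass

    # Illustrative watchlists; a real system would use far larger, curated lists.
    DIRECT_TAGS = {"#painkillersforsale", "#oxyforsale"}
    INDIRECT_TAGS = {"#plug", "#dm4prices", "#discreetshipping"}

    @dataclass
    class Post:
        post_id: str
        text: str

    def extract_hashtags(text):
        """Collect lowercase hashtags from a post's text."""
        return {word.lower() for word in text.split() if word.startswith("#")}

    def flag_post(post, indirect_threshold=2):
        """Flag a post if it uses a direct tag, or several indirect ones together."""
        tags = extract_hashtags(post.text)
        if tags & DIRECT_TAGS:
            return True
        return len(tags & INDIRECT_TAGS) >= indirect_threshold

    posts = [
        Post("1", "New stock just in #plug #dm4prices"),
        Post("2", "Lovely sunset tonight #nofilter"),
    ]
    flagged = [p.post_id for p in posts if flag_post(p)]
    print(flagged)  # ['1']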

Interviewing Suspects

An AI-powered chatbot at a university in Enschede, Netherlands, is being trained to interview suspects and extract information. The bot is expected to examine the suspect, ask questions and detect, from answering patterns and psychological cues, whether the suspect is being truthful. The bot's name is Brad. It is still in the early stages, but the development represents a new aspect of crime management.

Advantages and Disadvantages

While these futuristic advances in law enforcement have a lot of potential, one must also consider the drawbacks.

Advantages

Security needs and considerations are dynamic and complex, and you need a system that adapts quickly and efficiently. Human resources are capable, but have constraints. Here, AI systems have the advantage of being able to scale up and do the job more efficiently. For example, monitoring possible criminal activity on social media is a gargantuan task when done manually; human approaches can be slow and error-prone, whereas AI systems can scale up and perform the same work faster.

Disadvantages

Firstly, for all the hype around it, AI’s involvement in crime management is still at a nascent stage, and its efficiency in crime prevention or control at a larger scale is still unproven.

Second, crime prediction and prevention will require data collection, much of which could be personal data. This makes the government and law enforcement agencies vulnerable to extreme criticism from citizens and other groups. This will be interpreted as intrusion on citizens’ freedom. Data collection and snooping have been extremely contentious issues in the past, especially in democratic countries.

Third, developing AI systems that learn from unstructured data can be an extremely challenging task. Since criminal activity is becoming more sophisticated, it will not always be possible to provide structured data, and it is going to take time for such systems to adapt.

Conclusion

Currently, there are many challenges confronting the involvement of AI systems in crime management. However, it is worth the effort to engage AI in crime prevention and control. The nature of crime and terrorist activities is evolving to become more sophisticated every day, and purely human involvement is no longer enough to tackle such problems. In this context, it may be important to note that AI will not replace human beings, but will complement them. AI systems can be fast, accurate and relentless – and it is these qualities that law enforcement agencies will want to exploit. As of right now, it seems that AI will continue to become even more prominent in law enforcement and crime prevention.

Source: Techopedia

What’s stopping the adoption of machine learning?

What’s stopping the adoption of machine learning?

The latest advances in machine learning are currently rocking the market, with artificial intelligence (AI) leading the way as the most revolutionary technology. Recent studies show that 67% of business executives look at AI as a means to automate processes and increase efficiency. Everyone is talking about AI, as it looks like it is going to change our world forever.

General consumers believe AI to be a potential instrument for increasing social equity, with 40% believing that AI will expand access to fundamental services like medical, legal and transportation services for those on lower incomes. However, the adoption of AI for process automation could be much higher; a few issues are currently blocking it.

 

Lack of Organisation

Companies have several organisation heads who need to make the decisions: the CIO, CDO and CEO. All these officers run their own departments, which are supposed to drive their AI efforts together, at the same time and with the same level of effort. That sounds easy enough on paper, but in real life it rarely happens.

Clarifying who is responsible for spearheading the machine learning project and its implementation within the company is the first step. Where several data and analytics teams need to sync up their operations, it is not unusual for them to dilute their work across an assortment of smaller projects that contribute to the understanding of machine learning but fail to achieve the automation efficiency needed by the core business.

Insufficient training

Recent developments in deep learning algorithms have helped machine learning take a massive leap forward, though the technology is both old and new, as basic AI dates back to the early '80s. True specialists, though, are few and far between, as companies like Google and Facebook scoop up 80% of machine learning engineers with in-depth knowledge of the field.

Many companies know their limits, and no more than 20% think their own IT experts possess the skills needed to tackle AI. Demand for machine learning skills is growing very quickly, but those who possess the necessary expertise in deep learning algorithms may lack formal qualifications. Because this field is still new, many who are paving the way today are old-time programmers from an era when degrees in machine learning didn't exist.

Inaccessible Data and Privacy protection

AIs need to be fed a lot of data before they can begin to learn anything through learning algorithms. However, most of this data is not ready for consumption; this is especially true for unstructured data. Data aggregation processes are complex and time-consuming, especially when the data is stored separately or in a different processing system. All these steps need the full attention of a dedicated team composed of different kinds of experts. (For more on data structure, see How Structured Is Your Data? Examining Structured, Unstructured and Semi-Structured Data.)

Extracted data is also often unusable when it contains vast amounts of sensitive or personal information. Although obfuscation or encryption of this information eventually makes it usable, additional time and resources must be devoted to these burdensome operations. To solve the problem upstream, sensitive data that needs to be anonymized should be stored separately as soon as it is collected.
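
As a minimal sketch of the "separate sensitive data at collection time" idea, the Python example below splits each incoming record into a de-identified copy for analytics and a vault entry for the raw identifiers, replacing personal fields with a salted, keyed hash. The field names, the two stores and the environment variable are assumptions for illustration; a production system would add key management, encryption at rest and access controls.

    import hashlib
    import hmac
    import os

    # Secret key for pseudonymisation; in practice this lives in a key management service.
    PEPPER = os.environ.get("PSEUDONYM_KEY", "change-me").encode()

    SENSITIVE_FIELDS = {"name", "email", "phone"}

    analytics_store = []   # de-identified records, safe for ML pipelines
    identity_vault = {}    # token -> original identifiers, locked down separately

    def pseudonymise(value):
        """Deterministic keyed hash so the same person maps to the same token."""
        return hmac.new(PEPPER, value.encode(), hashlib.sha256).hexdigest()[:16]

    def ingest(record):
        """Split a raw record into a de-identified copy and a vault entry."""
        token = pseudonymise(str(record.get("email", "")))  # stable per-person token
        clean = {"person_token": token}
        identifiers = {}
        for field, value in record.items():
            if field in SENSITIVE_FIELDS:
                identifiers[field] = value   # goes to the locked-down vault
            else:
                clean[field] = value         # stays with the analytics copy
        analytics_store.append(clean)
        identity_vault[token] = identifiers
        return clean

    print(ingest({"name": "Ada Lovelace", "email": "ada@example.com", "purchase": 42.5}))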

 

Trust and Believability

When a deep learning algorithm cannot be explained in a simple way to a person who is not an engineer or programmer, interest from those who might harness AI for new business opportunities may start to dwindle. This seems to be especially true in some of the more traditional industries. Most of the time, in fact, historical data is practically non-existent, and the algorithm needs to be tested against real data to prove its efficiency. It is easy to understand how, in some industries such as oil and gas drilling, a less-than-optimal result may lead to substantial (and unwanted) risks.

Many companies that still lag behind in terms of digital transformation might need to revolutionize their whole infrastructure to adopt AI in a meaningful way. Results might require a long time before they're visible, as data needs to be collected, consumed and digested before the experiment bears fruit. Launching a large-scale machine learning project with no guarantee that it is worth the investment requires a certain degree of flexibility, resources and bravery that many enterprises simply might lack.

 

Conclusion

In a curious turn of events, many of the roadblocks that still slow or stall the advancement of AI are linked to human nature and behaviours rather than to the limits of the technology itself.

There are no definite answers for those who still doubt the potential of machine learning. This is a path that has never been trodden, and field experimentation is still needed during this development phase. Once again, it is our turn to leverage one of the characteristics that helped humanity achieve its most extraordinary heights: our ability to adapt. Only this time we need to teach this skill to our intelligent machines.

The 5 most amazing AI advances in Health Care

The 5 most amazing AI advances in Health Care

Artificial intelligence is revolutionizing our world in many unimaginable ways. At the verge of the Fourth Industrial Revolution, humanity is currently witnessing the first steps made by machines in reinventing the world we live in. And while we keep debating about the potential drawbacks and benefits of substituting humans with intelligent, self-learning machines, there's one area where AI's positive impact will definitely improve the quality of our lives: the health care industry.

Medical imaging

Machine learning algorithms can process unimaginable amounts of info in the blink of an eye. And they can be much more precise than humans in spotting even the smallest detail in medical imaging reports such as mammograms and CT scans.

The company Zebra Medical Vision developed a new platform called Profound, which performs algorithm-based analysis of all types of medical imaging reports and is able to find signs of potential conditions such as osteoporosis, breast cancer, aortic aneurysms and many more with a 90 percent accuracy rate. And its deep learning capabilities have been trained to check for hidden symptoms of other diseases that the health care provider may not have been looking for in the first place. Other deep learning networks have even earned a 100 percent accuracy score when detecting the presence of some especially lethal forms of breast cancer in biopsy slides.

 

Computer-based analysis is so much more efficient (and less costly) at interpreting data and images than humans that some have even argued that, in the future, it could become unethical not to use AI in professions such as radiology and pathology! (For more on IT in medicine, see The Role of IT in Medical Diagnosis.)

Electronic Medical Records (EMRs)

The impact of electronic medical records (EMRs) on health information technology is one of the most controversial topics of debate of the last decade. According to some studies they represent a turning point in improving quality of care while increasing productivity and timeliness as well. However, many health care providers found them cumbersome and difficult to use, leading to substantial technology resistance and widespread inefficiency. Could the newer AI-driven software come to the rescue of the many doctors, nurses and pharmacists fumbling every day with the unwieldy clunkiness of EMRs?

One of the biggest issues with this new health care technology is that it forces clinicians to spend far too much of their precious time performing repetitive tasks. AI can easily automate them, for example by using speech recognition during a visit to record every detail while the physician talks with the patient. Charts can and will include much more detailed data collected from a variety of sources, such as wearable devices and external sensors, and the AI will feed them directly into the EMR.

But moving forward from the first step of data collection, when enough relevant information is correctly understood and extrapolated by deep learning algorithms, it can be used to help improve quality of care in many ways. It can enhance patients’ adherence to treatment and reduce preventable events, or even guide doctors via predictive AI analytics in treating high-cost, life-threatening conditions. To name a practical example, a recent study published in the JAMA Network showed how big data extracted from EMRs and digested by an AI at University of California, San Francisco Health helped with the treatment of potentially lethal Clostridium difficile (C. diff) infections.

And it's easy to see that medical record data mining is going to be the next “big thing” in health care, when none other than Google has launched its own Google DeepMind Health project to improve the speed, quality and equity of access to care.

Clinical Decision Support (CDS)

Another interesting example of how deep learning can help machines make better decisions than their human counterparts is the proliferation of clinical decision support (CDS) tools.

These tools are usually built into the EMR system to assist clinicians in their work by suggesting the best treatment course, warning of potential dangers such as pharmacological interactions or previous conditions, and analysing even the slightest detail in a patient’s health record.
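
As a simplified illustration of one such CDS check, the sketch below screens a prescription list against a small table of known interacting drug pairs. The table entries and function names are made up for illustration; real CDS tools draw on curated clinical knowledge bases wired into the EMR.

    from itertools import combinations

    # Illustrative interaction table; real systems use curated clinical databases.
    INTERACTIONS = {
        frozenset({"warfarin", "aspirin"}): "Increased bleeding risk",
        frozenset({"simvastatin", "clarithromycin"}): "Raised statin levels",
    }

    def interaction_alerts(prescriptions):
        """Return a warning for every known interacting pair in the prescription list."""
        alerts = []
        for drug_a, drug_b in combinations(sorted(set(prescriptions)), 2):
            note = INTERACTIONS.get(frozenset({drug_a, drug_b}))
            if note:
                alerts.append(f"{drug_a} + {drug_b}: {note}")
        return alerts

    print(interaction_alerts(["warfarin", "aspirin", "metformin"]))
    # ['aspirin + warfarin: Increased bleeding risk']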

An interesting example is MatrixCare, a software house that was able to integrate Microsoft's Cortana AI into the tool it uses to manage nursing homes. The potent analysis capabilities of the machine learning engine strengthened the decision-making ability of the support tools immeasurably.

“One doctor can read a medical journal maybe twice a month,” explained CEO John Damgaard, “Cortana can read every cancer study published in history before noon and by 3 p.m. is making patient-specific recommendations on care plans and improving outcomes.”

CDS also brings forward the argument that machines are able to communicate with each other much better than humans do. In particular, different medical devices can all be connected to the internet just like any other internet of things (IoT) device (wearables, monitors, bedside sensors, etc.), and to the EMR software as well. Interoperability is a critical issue in modern health care, as fragmentation in the delivery of care is a major cause of inappropriate treatment and increased hospitalizations. When led by smart AI, the various EMR platforms become able to “talk” to each other through the internet, increasing cooperation and collaboration between different wards and even different health care facilities.

Drug Development

Developing a new drug through clinical trials is often a very costly affair, not just in terms of time (we're talking about decades) and dollars invested (the costs may easily reach several billion dollars), but human lives as well. Many new pharmaceuticals require many years of additional testing on real-world subjects during the so-called post-marketing period, and it's not uncommon for serious (or even deadly) side effects to be discovered many years after a medication has been launched.

Once again, efficient supercomputer-fuelled AI can root out new drugs from a database of molecular structures that no human could ever dare to analyse. A prominent example is Atomwise's AI, which was able to predict two drugs that could put a stop to the Ebola virus epidemic. In less than one day, their virtual search was able to find two safe, already existing medicines that could be repurposed to fight the deadly virus. The best part is that they found a way to effectively react to a pandemic emergency just by scanning through drugs that had already been marketed to patients for years, proving their safety. (To learn more about how technology is guiding drug development, see Big Data's Influence in Medicine and Pharmaceuticals.)

A Leap into the Future

Some of the most amazing technologies are not ready yet, being nothing more than just prototypes, but their implications are so breath-taking that they're still worth mentioning.

One of these is precision medicine, a really ambitious discipline that uses deep genomics algorithms to scan through a patient's DNA looking for mutations and anomalies that could be linked to diseases such as cancer. People like Craig Venter, one of the fathers of the Human Genome Project, are currently working on a new generation of computational technologies that can predict the effects of any genetic alteration, paving the road to individualized treatments and early detection of many preventable diseases.

A Word to the Wise

As excited as we may be because of the huge potential of introducing AI to health care, it is important that we understand its limitations. Using AI in medicine is not devoid of risks, although many of them will be easily overcome once we get accustomed to it.

The maxim “do no harm” is critical to establishing the ethical standards that will act as boundaries. Today we are entrusted with the responsibility of building the framework upon which future generations will make their decisions.

Source: Techopedia

What the car of the future looks like

What the car of the future looks like

What does the vehicle of the future look like? How does it work? Can it make our world more efficient, safer and ecologically sound? There is swirling uncertainty around what lies ahead for the automotive industry, but this doesn’t have to be scary; in fact, it is very exciting. It is the potential in that uncertainty that people are working to capture as they help build the car of the future.

IoT in the future of car technology

Before we look to the future, we must look at where we are today. Most of us don’t yet have much contact with the Internet of Things (IoT); it is still sheltered away in closed, controlled industrial spaces.

When it comes to everyday use, most people encounter IoT in the form of wearables or home assistant devices. In connection with vehicles, those platforms are important and engaging, and they are transforming the way we go about our daily lives.

That transformation begins with today’s advances in car tech such as telematics and infotainment services. Soon, automotive IoT will evolve to over-the-air updates, self-driving and vehicles interacting with the world around them.

Reaching the 5G mile marker

Right now, the car of the future is just on a practice lap; 5G will give us the green flag to speed up innovation. The millisecond latency of 5G will enable workloads to be shifted, balancing what work gets done in the car and what gets done in the cloud. This makes access to data faster and allows us to transform the onboard architecture of vehicles.

By utilizing edge computing, beamforming and network slicing, cellular network operators will be able to support roads full of self-driving vehicles. But the car of the future does not end its race once we round the turn to 5G. Once we figure out how the car of the future works, then we must decide how we want to use it.

This is what it looks like when 5G gets the green flag.

Swarm Mobility – Connected fleets and car-sharing

So, what are we riding in? Is the car of the future just a sleek, self-driving update of my current car, or is it something completely different? People used to think of cars as “horseless carriages”. The way we think of cars today may be similarly short-sighted.

As the way people travel continues to evolve, automakers are rethinking their products and their relationships with customers. What this could mean for public transportation, car-sharing and the transport industry is very interesting.

Perhaps soon, transportation systems will work more like the IoT swarm robotics that work inside smart factories and distribution centres today. In this model, when a task is assigned, the closest available robot takes the job, or teams up with others to get the job done as efficiently as possible.

If we apply this thinking to vehicles, a “swarm mobility” model could lead to easier travel options and better use of resources. That would mean more uptime per vehicle and fewer vehicles on the road, but more ways to connect with passengers.
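
As a toy sketch of the "closest available vehicle takes the job" rule described above, here is a short Python example of greedy, swarm-style dispatch. Vehicle positions, the distance metric and the data structures are all invented for illustration; a real dispatch system would also account for traffic, capacity and ride pooling.

    import math
    from dataclasses import dataclass

    @dataclass
    class Vehicle:
        vehicle_id: str
        x: float
        y: float
        available: bool = True

    def distance(vehicle, pickup):
        """Straight-line distance; a real system would use road-network travel time."""
        return math.hypot(vehicle.x - pickup[0], vehicle.y - pickup[1])

    def assign_nearest(vehicles, pickup):
        """Greedy swarm-style dispatch: the closest free vehicle takes the job."""
        free = [v for v in vehicles if v.available]
        if not free:
            return None
        chosen = min(free, key=lambda v: distance(v, pickup))
        chosen.available = False
        return chosen

    fleet = [Vehicle("car-1", 0.0, 0.0), Vehicle("car-2", 5.0, 1.0), Vehicle("car-3", 2.0, 2.0)]
    print(assign_nearest(fleet, pickup=(1.5, 1.5)).vehicle_id)  # car-3 is closest, so it gets the job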

With automotive IoT platforms and future car technologies like the digital car key or connected fleets, there are continual developments in ways to capture the potential of the connected car. While the road ahead is uncertain, there is great potential for the car of the future.

 

6 questions that stop your invoice getting paid

6 questions that stop your invoice getting paid

Your invoice is more than just a document you must retain for your records or that asks clients to cough up. Your invoice is there to help you get over your final hurdle of a project by getting what you’re owed.

A poorly designed invoice can be held over you as a reason not to pay, so you really need to anticipate any last-minute questions that could act as a barrier to getting you paid. These 6 fundamental questions should be answered to create a great invoice:

  1. Who is this payment demand from?

It’s incredibly important that the recipient of your invoice knows who it’s from. They get dozens of invoices a week and shouldn’t have to spend a great deal of time figuring out that the most recent one is from you. Adding contact details is the bare minimum; you should take time building an invoice template that reflects not only your business but your brand.

  2. What work could this be for?

You shouldn’t assume that your client knows what you’re billing them for; they may have a handful of other projects running at the same time. Even for a straightforward project, it’s a great idea to detail exactly what you’re asking to be paid for and remove any doubt. It’s handy to include purchase order numbers, project reference numbers or the project’s name on the invoice.

  3. Did it really cost that much?

Quoting for a project can bring up a lot of scary figures for the client, so this is a great opportunity to remind them how much work you put in and exactly what you delivered. Try to be as descriptive with each line item as you can.

Include a “notes” section on your invoice; this space could help you remind the client about positive news that came out of the work you’ve done.

  4. What about that issue?

As mentioned earlier, a notes section can help you highlight positive news, but it can also mention unresolved issues that may keep your client from paying you straight away. A lot of projects end with a snagging list. Head off any questions that could turn into a seemingly never-ending back and forth with something like “In Monday’s meeting we’ll agree a list of final changes”; this reminds the client that you’re on top of any future changes, but that they need to pay you for the work you’ve already completed.

Another way to resolve issues like this is by including your phone number, so the client can contact you to sort out these problems more quickly.

  5. How long have we got before payment is due?

It isn’t pleasant having to chase payments, so specify a due date with clear payment terms on your invoice to avoid any awkward questions. If the client hasn’t paid by the due date, it’s then a lot easier to send overdue payment notices.

The most efficient way to get paid within a week of issuing your invoice is by including zero-day terms, which asks the client to pay immediately. If you don’t set a payment date, the legal default of 30 days applies, after which you’re entitled to charge interest. It’s better for both parties if you set out your payment terms, and the interest you plan to charge, in your initial contract as well as on your invoices; there’s a short worked example of the arithmetic after this list.

  6. How do I pay?

If your client is ready to pay, then you want to make the process as painless as possible. Make sure your invoice has your BACS payment details, so they can send you money directly. Another way is to include immediate payment links to financial platforms like PayPal.
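
To make the payment-terms arithmetic mentioned above concrete, here is a small Python sketch that works out an invoice's due date and the interest accrued once it is overdue. The 8% annual rate and the simple daily accrual are assumptions for illustration only; the statutory rate and rules depend on your jurisdiction and should be stated in your contract.

    from datetime import date, timedelta

    def due_date(issue_date, terms_days=30):
        """Zero-day terms mean payment is due immediately; 30 days is a common default."""
        return issue_date + timedelta(days=terms_days)

    def late_payment_interest(amount, issue_date, paid_date, terms_days=30, annual_rate=0.08):
        """Simple daily interest on the days past the due date (illustrative only)."""
        days_overdue = (paid_date - due_date(issue_date, terms_days)).days
        if days_overdue <= 0:
            return 0.0
        return round(amount * annual_rate * days_overdue / 365, 2)

    invoice_total = 2400.00
    issued = date(2018, 6, 1)
    paid = date(2018, 8, 15)
    print(due_date(issued))                                    # 2018-07-01
    print(late_payment_interest(invoice_total, issued, paid))  # interest for 45 days overdue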

If you are looking to take the next steps feel free to contact Hanson Regan for advice at Info@hansonregan.com

Is IT contracting for you?

Is IT contracting for you?

Have you been thinking about becoming an IT contractor? It has a lot of great perks but can be a little risky. So, let’s take a look and see if it’s the right decision for you.

What’s in it for your client?

There are several reasons why companies like using IT contractors.

  • They’ll be more flexible with hours, more so than permanent staff.
  • They are easier to hire and fire, as they are more of a short-term commitment.
  • They provide skills that in-house teams might not.
  • But mainly, they save money. Without the cost of sick pay, holiday pay, redundancy and National Insurance, companies save money even if they end up paying you more.

What’s in it for you?

Everyone will have their own reasoning, but the following reasons are the most common:

  • Being your own boss is extremely enjoyable and satisfying.
  • More money - Contractors are usually paid more than the employees they work alongside.
  • Freedom - Contractors can pick and choose when and where they work.
  • Variety – Each new contract brings a new company, and the varied skillset you build makes for a very impressive CV.
  • Lower taxes – Contractors who take professional advice can greatly reduce the amount of tax they pay.

 

The disadvantages

Nothing’s perfect, and if it was, everyone would be doing it.

  • Some skills are unsuitable – It could be that the employer needs a stable workforce where a customer expects to deal with the same employee every time.
  • Less security – You won’t be protected in the same way your permanently employed counterparts are.
  • Uncertainty – There are no guarantees; if one contract ends, there won’t always be another waiting for you.
  • Effort – Running your own business means a lot of paperwork, rules to obey and accounts to keep.
  • Lonely – Being on your own can be lonely, and there’s no one to pay you if you’re off sick or in need of a holiday.

 

Who makes a great contractor?

The qualities a contractor needs are different to those of a permanent employee.

  • Ability to adapt – With different sites, conditions, tools and cultures, you’ll need to be great at adapting to the different conditions that each new contract brings. Those who can’t do this will struggle, especially on their first contract.
  • Ability to build relationships – A great contractor needs to be able to build work relationships well and quickly.
  • Willingness to help without harsh criticism – IT contractors have a huge amount of knowledge of how things are done differently elsewhere. It can be very useful for both permanent management and employees to tap that wealth of knowledge.
  • Ability to know when your advice is wanted – Sometimes they’ll want your advice and sometimes they won’t. A good contractor will be sensitive to this.
  • Aware of potential business – Any problem the client has is potential new business for the alert contractor.
  • Will have a database of potential clients – A successful contractor will have built up a list of clients they have worked with before and have permission to contact every so often.
  • Will keep client databases up to date – A successful contractor will keep their databases up to date, including phone numbers, addresses and emails. There should never be a situation where a client is desperately seeking you for work without being able to find you.
  • Great reputation – Contractors should have great rapport with previous clients and work

 

The Next steps

If contracting is still attractive to you, and you believe you can cope with the disadvantages, then the next step is to research. Don’t give up your current employment until you are sure there is a market for your skills as a contractor.

 

If you are looking to take the next steps feel free to contact Hanson Regan for advice at Info@hansonregan.com

The Future of SAP

The Future of SAP

Thinking back on 2017’s performance will help us prepare for the new challenges that will arise in the future.

2017 was a strong year, with launches, news and trends. The SAP ecosystem was booming, fuelled by revenue from its cloud business. With such a great year behind us, 2018 looks even better.

SAP Leonardo, the company’s digital innovation system, will be pushing growth across everything related to SAP’s Cloud Platform, including Internet of Things (IoT), machine learning, analytics, big data, design thinking and blockchain. SAP has committed £1.7 billion to IoT over the next five years, which shows where talent can look to remain competitive and actively bridge skillset gaps.

SAP is also looking to foster business innovation by investing in startups through its SAP.iO fund, with a focus on AI, machine learning and blockchain. The initiative operates a foundry that runs accelerators in partnership with Techstars, the largest startup accelerator program in the world, while the fund invests in startups independently.

The importance of cloud services is becoming ever more prevalent as companies adopt them. Gartner predicts that by 2021, 28% of all IT spending will be on cloud-based infrastructure, middleware, application and business process services.

With that, we can assume that tech roles such as developers, systems analysts and data specialists will be in high demand in 2018, including those with HANA skills. Big data, along with the ability to automate the identification of data patterns and draw conclusions from them, is another high-demand skill set.

 

We at Hanson Regan combine your vision with our passion for rapid precision resourcing to help you make what you do really successful.

If you’re interested in learning more about us then please email info@hansonregan.com

How artificial intelligence can deliver real value to companies

How artificial intelligence can deliver real value to companies

AI (artificial intelligence) is finally starting to deliver real-life benefits. After decades of promises and disappointments, early-adopting companies are reaping the rewards. But how can AI deliver real value to companies?

Retailers on the digital frontier rely on AI-powered robotics to run their warehouses, automatically order stock and forecast electricity demand. Automakers are now harnessing the technology in self-driving cars.

Computing power is growing, algorithms and AI models are becoming more sophisticated, and the world is generating once-unimaginable volumes of the fuel that powers AI: data. Billions of gigabytes of data are collected every day by networked devices, from web browsers to turbine sensors. These developments are driving the new wave of AI.

Investment in AI development in 2016 was between $26 billion and $39 billion, three times as much as three years earlier. The majority of this investment comes from internal R&D spending by huge digital companies like Amazon, Baidu and Google.

Much of the AI adoption is inside the tech sector, and anything outside it is at an early, experimental stage. In a McKinsey Global Institute discussion paper, Artificial intelligence: The next digital frontier?, which includes a survey of more than 3,000 AI-aware companies around the world, we find that early AI adopters tend to be closer to the digital frontier, are among the larger firms within their sectors, deploy AI across technology groups, use AI in the most core parts of the value chain, adopt AI to increase revenue as well as reduce costs, and have the full support of executive leadership. Companies that have not yet adopted AI technology at scale or in a core part of their business are unsure of the business case for AI or of the returns they can expect on an AI investment.

Early adopters, however, find that using AI across operations and within core functions adds value to the company. In a McKinsey survey, early AI adopters that combine a strong digital capability with proactive strategies have higher profit margins and expect the performance gap with other firms to widen over the next three years.

Are you making these job search mistakes?

Are you making these job search mistakes?

Job hunting is exhausting and the competition is getting fierce, so, we’ve got you covered with some job search knowledge.

  • Job boards are not your only resource

    Being in the digital age, we assume that the world wide web has us completely covered. However, did you know that 80% of jobs never get posted and are only found through networking? That means googling isn’t enough; make sure you’re also using your real-world connections. Reaching out to family, friends, former co-workers, classmates or teammates could really help you.

 

  • Companies listen to their current employees

    Referrals are super important: they account for 40% of all hires, yet only 7% of job applicants get an employee referral. So, if you know someone on the inside, ask them for a referral, since companies overrun with applicants will rely on their existing employees’ opinions, especially as they want to hire people who fit in with the company vibe.

 

  • They’re not even reading your CV

    Recruiters spend on average 6 seconds reviewing your CV. That means the CV won’t get you the job on its own, but a bad one could ruin your chances almost instantly.

    So, keep it simple, clean and short (two single-sided pages). Include relevant information, use a professional email address and spell check! 76% of CVs are thrown out because of unprofessional email addresses.

 

 

  • The competition is fierce

    On average, 250 CVs are submitted for each job listing. That’s a huge number of applicants, so it’s even more important that you stand out from the crowd. You need to tailor your application to the role and make sure you really want it; otherwise the effort will be draining, and the number of applications will be discouraging.

 

  • The process is looooong

    Job searching isn’t quick and that’s something to keep in mind when you find an opening that looks amazing. Filling a job opening takes on average 52 days, so take time customising your application and have patience.

If you’re an IT professional looking to take the next steps contact us at careers@hansonregan.com

Tips for your CV

Tips for your CV

What if you’re at a party and a friend of a friend happens to work for your dream company? What if they offer you the job of a lifetime? What if they need a CV within 24 hours? Are you prepared?

Even if you’re updating your CV, creating a whole new one or preparing for that what if scenario, you’ll need these tips before you get started.

  1. Document your achievements; it’ll be a lot easier to write them down as you achieve them rather than trying to remember several years down the line.
     
  2. Include awards and accolades; this will help you stand out.
     
  3. List any professional or internal courses you’ve attended. This shows that you have an interest in learning and will continually develop yourself and your skills.
     
  4. Taking note of your positive traits and areas for improvement is great fodder for your CV.
     
  5. Ask for feedback from your boss, colleagues, suppliers and clients, and include it in the relevant sections. This will help you see your strengths and weaknesses and prepare for those unexpected “What if…” moments.
     

If you’re an IT professional looking to take the next steps contact us at careers@hansonregan.com

5 Job sources you NEED to be using

5 Job sources you NEED to be using

We’ve put together some job sources that might just make a big difference in your next search. So, update your CV and let’s get stuck in.

The time has come for a change. So, you’re looking for new employment and, as if job searching wasn’t hard enough, the competition is getting stiffer.

 

In this day and age, having a degree and work experience just isn’t enough; you need the know-how as well. Lucky for you, we’ve put together some job sources that might just make a big difference in your next search. So, update your CV and let’s get stuck in:

 

  1. LinkedIn

LinkedIn is the #1 mobile app for professional networking and is important if you want to maximise your chances of landing your dream job.

To date, LinkedIn has 11 million active job listings, so get cracking on that profile; keeping it updated makes you 18x more likely to appear in searches. Create an impactful summary and a thorough list of your skills, add reliable references and build your network. A well-rounded network speaks to your influence in the industry, and this will draw in a lot of potential employers.

  2. Xing

Similar to LinkedIn, Xing is a career-oriented social networking site. Basic membership is free; however, to unlock all of Xing’s features there is a membership fee.

Setting up your profile is simple and like LinkedIn you will need to make your profile stand out as you face significant competition. Add a professional photo, compile a thorough list of your skills and build your network.

  3. Personal networks

80% of jobs never get posted and are only found through networking. Make it known throughout your personal network that you are looking; you never know what could come of it.

So, even though social media is a powerful tool for connecting us worldwide, you will need to engage in some real-world activities. Start by creating a list of former co-workers, classmates and so on, then try reaching out to family and friends. They might have some great advice, and you could learn something about them too.

  4. Job boards

 

Pretty straightforward, and you’ve probably used one before: a business signs up, lists available job roles and reviews applications as they come in. This simplifies the process, as forms on company sites can be repetitive and time-consuming, which makes job boards a necessity for any job seeker.

 

Many job boards have their own specialities, and ideally you should familiarise yourself with the different options. For example, CareerBuilder is better known for offering options for applicants with degrees. Glassdoor is unique in that it offers reviews and ratings from current and previous employees. And Indeed offers job listings with a mix of work-from-home and contract roles. Another great addition from Indeed is its app Job Spotter, where you upload photos of help wanted signs in return for redeemable gift cards; you may as well earn some extra cash while you job hunt.

 

  5. Recruitment Agencies

Job hunting is exhausting and can be a job in itself, so recruitment agencies are there to help. Agencies work by matching jobs with the perfect candidate, so all you really need to do is put yourself in the hands of an agency.

Submit your CV and complete an application; if you meet the requirements, you’ll be invited for an interview and the rest is up to the agency.

Here at Hanson Regan we’re much more than an agency: instead of just matching based on qualifications and work experience, we combine your vision with our passion. We resource exceptionally talented IT professionals, like yourself, with rapid precision to help make what you do really successful.

Hanson Regan also offers other services, so if you’re an SAP professional looking for the perfect position, get in touch with Hanson Regan today.

 

For more information contact: info@hansonregan.com

Is cloud the future of analytics and data processing?


With Cloud technology there are many potential opportunities and capabilities that are opening a whole new world.

With Cloud technology there are many potential opportunities and capabilities that are opening a whole new world. Cloud allows users to access applications, information and data of all sorts online, which creates a sharing environment and gives users access to a whole world of people and information within their cloud. Naturally, cloud business intelligence and analytics tools are being adopted at a rapid rate by enterprises worldwide.

Four separate reports detail a significant need for analytics that operate across the organisation, both on premises and in the cloud, so that businesses can make faster, data-driven decisions that support their growing needs.

So, what did the reports actually say?

  1. Cloud analytics and BI (business intelligence) heighten business performance:

Dresner Advisory Services saw a sharp positive increase in user sentiment and spend towards cloud BI in 2018.

 

  2. Adoption of cloud analytics is growing across organisations:

Ventana Research found that nearly half (47%) of their respondents  said cloud analytics is important to three or more departments within their organisation. A centralised analytics platform helps to bring all these sources and systems together.

  3. Embedded analytics tie together data across the enterprise:

Ventana also found that 53% of organisations prefer to buy predictive analytics embedded in applications. They also noted that embedded analytics enables businesses to take a self-service approach to their data needs.

  4. Data science delivers enterprise value:

Reports from the Eckerson Group identified 10 key data science challenges, along with strategies and best practices to address them.

To learn more, visit the SAP Analytics Cloud area of sap.com.

Who is your favourite Hot Geek in TV and Film?


As it’s almost Valentine’s Day we thought we would take a light-hearted look at some of the best geeks in TV and film.

As it’s almost Valentine’s Day we thought we would take a light-hearted look at some of the best geeks in TV and film. Personally we’re backing Emma Stone’s awkwardness in House Bunny – but who is your favourite ‘hot geek’?



Alyson Hannigan – Buffy The Vampire Slayer

The peasant-skirt-wearing wiccan of Sunnydale was the quintessential loveable 90s geek – who also happened to help save the Sunnydale Hellmouth from monsters. Willow underwent a number of transformations over the show’s run – culminating in her almost destroying the world – but it was always her intelligence and lack of cool that were her most adorable qualities.



Adam Brody – The OC

At the time, Seth Cohen’s eclectic indie music taste and inability to stop talking were the perfect way to win over hearts everywhere. The Californian was the sunny antidote to Ryan’s incessant brooding and monosyllabic narrative – not only that, but he defied all odds and got Summer to fall for him. Good work.
 


Emma Stone – House Bunny

The ridiculous House Bunny may be awful filmic tripe – but Emma Stone’s nerdy charm is hilarious. She nails awkwardness and allure without trying – no mean feat. Sadly the film’s premise – geek girl turns chic for the attention of a guy nowhere near her league (yes, the same as She’s All That) – was sexist and predictable, but hey, it’s Hollywood.



Jake Gyllenhaal – The Day After Tomorrow

Oh Jake Gyllenhaal, couldn’t you just drown in those deep blue eyes? Well, in The Day After Tomorrow you almost do. In one instant you’re looking straight at old blue eyes, only to snap back to reality and realise that while you’ve been staring, floods, ice, tornadoes and an action adventure based on physical impossibilities have all taken place. The film itself is constantly confusing, especially as your attention lapses every time Jake appears onscreen and says something nerdy.



Gillian Anderson – The X-Files

Alongside Smouldering Mulder, Dr. Dana Scully managed to display an extremely contained, acute case of scepticism even when faced with supernatural foes like invisible elephants, Tooms and even alien autopsies. The frisson between her and Mulder had viewers tuning in religiously, but the will-they-won’t-they charade was always secondary to Scully’s natural flair for investigation.


 

Joseph Gordon-Levitt – Inception

The once long-haired, adorably nerdy 3rd Rock From The Sun alien Tommy Solomon grew up! Following a spate of appearances in films like Ten Things I Hate About You, where he played the lovable dweeb, he took some time away to work on indie films like Mysterious Skin before marrying his new-found edge with natural geekiness. Inception was the film that brought together those two sides of his personality. And the film where we saw him cutting a fine figure in a suit.
 

Scarlett Johansson – Iron Man

Just Google her, you’ll understand.

Johnny Depp – The Ninth Gate

In Roman Polanski’s The Ninth Gate, Johnny Depp unscrupulously under-bids clients for first editions of Don Quixote before somehow becoming embroiled in a demonic devil pact. Before he gave up acting to become a full-time pirate who only featured in Tim Burton films, Johnny Depp was the coolest, most beautiful man on earth for at least a decade. Then he was usurped by trendier, younger people like Ryan Gosling. Sadly Ryan Gosling hasn’t been particularly geeky in any films yet so he hasn’t made the list.


Julia Stiles – Ten Things I Hate About You

She may have been a badass with a penchant for riot grrrl and shrew-like tendencies – but Julia Stiles was never seen without a book in her hand. From The Bell Jar and Shakespeare to an Edgar Allan Poe sticker on her school folder – complete with scathing opinions on Hemingway – her literary insight was fiercely attractive.

Happy Valentine’s Day...

Hanson Regan calls for support to become the National ‘Public’ Champion for UK in the European Business Awards


Hanson Regan from the UK will today compete for the title of ‘National Public Champion’ in this year’s European Business Awards, sponsored by RSM, as the public vote opens for the first time.


The company, already named as one of the National Champions in the independently judged part of the competition, has posted a video of its company online at http://www.businessawardseurope.com/vote/detail-new/united-kingdom/17050 giving a powerful insight into the story of their business and its success.

Competing against all other countries’ National Champions for the public vote, the company with the most votes will be named ‘National Public Champion’ for the UK on 7th March 2016. The first phase of the online voting is open from 11 January to 26 February 2016.

John Kelly, Director, Hanson Regan says: “With just over seven weeks for people to vote online for their favourite company, we are hoping that the public will watch our video and vote for us. The public vote means a great deal as it is both our existing and potential customers and clients giving their approval to our success.”

The second public vote will see all of the National Public Champions from 32 different countries compete against each other to become the overall European Public Champion. The voting for this takes place between 7th March and 26th April 2016, and the result will be announced at the European Business Awards Gala event in June 2016.

Adrian Tripp, CEO of the European Business Awards said: “Last year the public vote generated over 170,000 votes from across the world.  It is a very important part of the Awards as it gives these entrepreneurial companies another way of showcasing their achievements.”

He continued: “So we ask everyone wanting to support their country or business in general to take some time, watch the videos, and cast their vote.”

Separately, the independent judging panel in the European Business Awards will review and score all the video entries plus written entries and select a final top 110 Ruban d’Honneur recipients who will face the judging panels in the final stage of the competition. The overall winner from each category will then be announced at the same time as the European Public Champion at the Gala Event in June.

The European Business Awards was created to recognise and promote business success and support the development of a stronger business community throughout Europe.  Additional sponsors and partners of the Awards include ELITE, the UKTI and PR Newswire.

In the 2014/15 competition, all EU member markets were represented plus Turkey, Norway, Switzerland, Serbia and the Former Yugoslav Republic of Macedonia. Their combined revenue exceeded €1.5 trillion, and together they generated profits of over 60 billion Euros.

Uh-oh the Xmas ‘wind-down’ is here…


It’s the most wonderful time of the year; the John Lewis and Sainsbury’s Christmas adverts have resulted in people bursting into tears at Mog the Cat and speculating over why an old man has been banished to the moon...

It’s the most wonderful time of the year; the John Lewis and Sainsbury’s Christmas adverts have resulted in people bursting into tears at Mog the Cat and speculating over why an old man has been banished to the moon – all soundtracked by Aurora Aksnes’ ‘quirky’ cover of Oasis’ ‘Half the World Away’.

The cheese boards and Bucks Fizz have been filling the shelves since October and Santa’s grottos are popping up in shopping centres across the country at an alarming rate. It’s pretty easy to see why work productivity slumps to an all-time low during the festive season.

The problem is, if you’re working, this ‘down time’ is a myth. Chances are you’ve slowed down, but there are things to be done which will create stress before and after the Christmas period if not completed.

To reduce the pre and post-Christmas stress – and effectively avoid ruining Christmas – here are five ways to be productive over the Christmas period.



1) Don’t burn the candle at both ends


It’s the end of the year: there are work Christmas parties, catching up with friends before they leave to visit family, cheap cava, and those who have been misers all year develop a classic case of Ghost-of-Christmas-Yet-to-Come and suddenly want to buy rounds at the pub. Nice.

With all of these temptations, it’s easy to find yourself out and about more than usual, and your health and purse / wallet take as much of a beating as your liver. But don’t forget, going into work hungover and tired is ten times worse during the ‘festive’ period when everyone else is happy. You don’t want to be the ashen-faced Scrooge in the corner filling out timesheets and smelling of stale booze. Instead, use the quiet time to get yourself organised – finish off any outstanding projects and start the New Year with nothing hanging over your head.

2) Minimise looking at email

It’s tempting to open every e-card or ‘last chance 50% off Yankee candle’ email that arrives in your inbox, but before you know it, the morning will have passed and you’ll be left wondering why you spent your last £20 on whisky stones.

Try allotting ten minutes every two to three hours to opening and checking emails. (Let’s face it, if it’s something flagged urgent they’ll give you a call and follow up anyway). At least this way you’ll get to be productive without even more interruptions.

3)  Create a top-level ‘must get done’ list

If you have lots of actions that seem unachievable – and are not helped by Cliff Richard constantly ringing in your ears - write down the most important tasks. That way, when you come back after the New Year at least some of the big projects you wanted to deliver will be done.

It also means when everyone else is playing a game of furious catch-up in January that you’ll be the one that appears organised, calm and in control. Double bonus.
 

4) Keep up with what’s happening in your industry

Of course Christmas is the perfect opportunity to unwind, but don’t forget to stay on top of what’s happening in the news – even if it’s just browsing sites the evening before you return to work. After all, when you’re an ocean of serenity and everyone else is freaking out, you can purposefully ask: “I take it you saw the latest development in [insert niche industry development here]?”, feigning mild surprise when your question is met with hate-filled looks.

5) Clean up

There’s always that one person in the office who raises an eyebrow when they pass a desk with a half-empty mug of coffee on it, or heaven forbid, a pen not in its allocated pen pot. While you might have perfected the ‘selective deafness’ act to cope with them throughout the year, December is a great time to exorcise those clutter demons. Plus there’s something cathartic about shredding those pages of notes taken in meetings!

If you are hiring SAP talent – you shouldn’t be looking at CVs


Director John Kelly discusses the Hanson Regan matrix, a system which gives us total certainty around what the client needs...

A controversial statement perhaps, but a true one nonetheless. I may run a recruitment firm, but as I also have an extensive background in systems implementation and quality assurance, I like to think that I’m speaking the same language as our clients!

For me, the recruitment has to start with the project, not the person spec. What are the business objectives? What are the key deliverables? What daily tasks need to be undertaken to achieve the deliverables? What are the top three skills needed? These are all questions we ask, and they help us develop the Hanson Regan matrix: a system which gives us total certainty around what the client needs and allows us to identify exactly the right person. We then reference that candidate against all the deliverables, experience and skills needed.

Not a CV in sight!

Of course, this means a sea change in attitudes.  So often we are given a traditional job spec which has none of the key information needed for us to make an informed decision on the right person. We need to use the same smart agile principles to resource a project as we do to deliver it. To me that’s just common sense. Isn’t it?

But the key to all this is access to the line manager. So often we have to fight to get that access, but when we do, organisations see their time to hire fall dramatically. And that has to add value to the bottom line. In short, we are a project enabler!

The traditional IT contract recruitment process is totally flawed – we have developed Precise SAP recruitment - and it works. Precious time saved – our clients’ bonuses protected – what’s not to like?

Five Apps that can help with your health


Smartphones have come a long way, and the App market has quickly become an innovator for some really clever functionality.


 


We take a look at five of the best apps that claim to help with your health.

ADHD Tracker

ADHD Tracker is a clever little app that uses the Vanderbilt Scales published by the American Academy of Pediatrics. It allows parents and teachers to submit behavioural assessments for children aged 4-18 who are being treated for ADHD.

Sleep As Android

Originally intended to wake sleepers gently at the optimal point in their sleep cycle, the app has come a long way. It will alert you if you’re running on a sleep deficit and even monitors your sleep talking and snoring. It even tracks conditions like sleep apnea. The one downside is that it’s only for Android users – although the iPhone has its own equivalents.

Fitstar Personal Trainer

This app offers users fitness videos alongside the number of calories they burn during a workout. The information is then passed on to Apple’s Health app via HealthKit.

iHeadache

If you suffer from migraines, this could be the app for you. Users are able to ‘start a headache’ by inputting things like symptoms, duration and severity, impact on daily life, medication used and triggers. The comprehensive list means that once you’ve recorded the details, you can generate reports that classify your headaches by type. Very handy.

DreamLab

A new app developed by Vodafone Australia and the Garvan Institute of Medical Research is hoping to help cure Alzheimer’s and cancer. When a smartphone is charging, the app automatically downloads genetic sequencing profiles. These are then processed using the phone’s CPU and sent back to the institute to be used in cancer research.

 

The mobile sharing economy is here...


Mobile phones sit at the heart of the uber-connected, digital ecosystem...

I was collating the monthly marketing report when I clocked an interesting stat: access to our site via mobile was up 40% vs last month. 

It’s hardly surprising. Yes, it’s a jump, but month on month there’s a steady increase in people accessing our site via mobile. We also know that mobile phones sit at the heart of the uber-connected digital ecosystem, and our continued love affair with our mobiles has disintermediated old ways of connecting.

Does this make us attention-deficit digital narcissists? Probably. Plenty has been written about traditional jobs having to restructure or cease simply because they’re too slow to adapt to the changing landscape.

From regular mail deliveries losing out to ‘out of hours’ trackable parcel deliveries thanks to eBay and Amazon Prime, to traditional black cabs being side-lined by cheaper services like Uber and its fast API use, there is very much an attitude of ‘adapt or get left behind.’

But mobile technologies are not the source of cultural inertia that some companies would have us believe. They are now being seen as agile marketplaces which enable us to use available technologies all from one small, rectangular-shaped source. Mobile phones have granted access to open marketplaces where you can bid for website designers, or encourage a community to crowdfund your project or album via a platform like Bandcamp. These ‘megatrends’ are rewriting the traditional business paradigm, and collaborative consumption is being promoted as the new, efficient replacement for what was once ‘ownership’.

The rise in digital has made it easier than ever for people to quickly find experts to share their skill sets. Connecting with people, full stop, has never been simpler. What consumers value now is community-based innovation, which has seen the growth in ‘soft assets’. Take, for example, peer-to-peer innovations like Tinder, or less nefarious – depending on your experience – business models like Airbnb, which is based on an online rating system.

Back in 2013 the collaborative economy was being cited as the third major development in the digital economy. It was heavily linked with social media as an ecommerce feedback platform, perfect for discussing brands in an open arena. In only two years it seems we are making huge gains in this area. 

Long-term, companies will look to join the connected, collaborative economy. With smartphones overtaking sales of PCs in the last five years, and with mobility being a clear gateway to the web, integration and collaboration with a single swipe or press of a smartphone is the future. 

If you haven't already, it's time to jump on the smartphone bandwagon and start thinking about how it can innovate your business.

Old Skool Recruitment v New Skool


It’s hard to imagine a time before email, smartphones and Google. But there was a time that recruitment existed without these modern aids - that time was the 90s...

It’s hard to imagine a time before email, smartphones and Google. While the internet can be blamed for plenty of things – Candy Crush, anyone? – the role of a recruiter should, in theory, be made simpler by the much-vaunted democratisation of information at the press of a button.


But is it really? Let’s take a look back at the job role of a recruiter, before the internet… let’s go waaay back, all the way to the 90s. 

Old Skool Telephones


It might seem strange to consider recruiters doing their job without the aid of a mobile phone. After all, it is expected that by 2017 75% of mobile phones will be smartphones, and reaching candidates via text, email and calls is an integral part of the job.

But this wasn’t always the way. Back in the 80s and 90s, recruiters spent their days glued to their desks, chatting down a phone which had a cord attached to it. Rather than Google searches, clients were located with the use of a phone book, and their desks were most likely covered with long lists of applicants to call.

In order to speak to candidates and clients, recruiters had to call them at their homes after work hours. Often they would only be able to speak to one in ten people. 

Old Skool Advertising

Internal recruitment was a long-winded process before websites and LinkedIn existed. One of the most effective means of advertising was taking out physical ads in papers as a means of attracting talent. While you could hit every household in reach of a local paper there wasn’t anything like Google Analytics available to chart how effective this was. Sometimes the newspaper would go straight into the household bin. Classic post and pray. 

Still, newspaper campaigns could be tailored to any budget and were relatively inexpensive as opposed to adverts on numerous online job boards.

 

Contracts

Instead of email, contracts would be faxed over and there would be a mad rush to retrieve them from the company fax machine. There was no Google Trends or gamification either, so this would often be a waiting game for the prehistoric recruiter.

While this may sound antiquated, before fax machines contracts would actually be sent by post. This would take a minimum of a day, something unheard of in today’s world of instant gratification.

Interviews

Before the ease of Skype where ‘face to face’ interviews can be conducted from the comfort of your own home, recruiters would be expected to get up early to meet candidates before and after working hours in addition to weekends. 

While this still happens, there is at least the option of conducting interviews from the comfort of your sofa. 

It’s fair to say that the growth and proliferation of technology has changed the way traditional recruitment has been done, but many of the elements remain the same. It should be noted that around 10% of hires still come from traditional channels, which is three times more popular than social media. While efforts are being made to attract talent from online sources, traditional recruitment methods still require attention and recognition as a potential source of hire.

Happy Halloween: The Technological Fear Myth


In the lead-up to Halloween, we take a look back at five of the most notable, technological events – many in recent years - that gave us cause for concern.

Prior to his death in 399 BC Socrates famously warned against writing because it would "create forgetfulness in the learners' souls, because they will not use their memories." This Socratic warning has been bemoaned by people since, with increasing concern that the introduction of new technology is something to be suspicious of.

From the French statesman Malesherbes railing against the printed page socially isolating readers to Nicholas Carr’s much-cited article Is Google Making Us Stupid?, there have been countless claims about how technology is affecting the mind and brain – most of which are nonsense.

But we’re human and we love a good worst-case scenario story, so, in the lead-up to Halloween, we take a look back at five of the most notable events – many in recent years - that gave us cause for concern.

Hopefully being more informed will help us to avoid the inevitable robot revolution or technological apocalypse.

L’Arrivee d’un train en gare de La Ciotat

Despite only being 50 seconds in length, Louis Lumière's Arrival of the Train, which gave birth to the documentary film back in 1895, caused panic and terror when it was first seen by an audience.

The train appeared to hurtle towards the audience, and they feared it could fly off the screen and crash into them. Something similar could be said of Hideo Nakata's 1998 film The Ring, when Sadako Yamamura crawls out of the television set, meshing tradition with modernity in a chilling way.

The War of the Worlds – Radio Broadcast

Perhaps the most notorious event in American broadcasting history was Orson Welles' War of the Worlds broadcast of 1938, in which the Mercury Theatre on the Air enacted a Martian invasion of Earth, convincing more than a million Americans in the process that the Earth was indeed under attack.

With the newspaper industry damaged after the Depression and radio on the rise, newspapers sought to discredit radio as a means of communication. On Sunday evening, October 30, at 8 PM, the announcement was made: “The Columbia Broadcasting System and its affiliated stations present Orson Welles and the Mercury Theater on the air in ‘War of the Worlds’ by H.G. Wells.”

Millions of Americans tuned in on time and were made aware that it was a story, but for those who tuned in ten minutes late, the Martian invasion was well underway.

Around a million listeners believed this to be a real invasion; panic broke out across the country and people took to the streets. News soon reached the CBS studio and Welles had to remind the public on air that it was fictitious. While no laws were broken, networks were more cautious from that day on.

The Y2K Millennium Bug Scare

Fears of computer malfunctions were given government backing when, in 1999, British newspapers warned what to do about fax machines, VCRs and answering machines hurtling towards a Y2K disaster. The Millennium Bug was all to do with the limitations of clocks inside computers: dates stored with two-digit years meant that ‘00’ would be read as 1900 as opposed to 2000.

While this meant bodies like Action 2000 were set up to make sure machines were ‘year 2000 compliant’, literature produced at the time still had to comfort the public that their lawn mowers, hedge trimmers and barbecues would not suddenly become sentient and attack. The guide stated that "the very worst that can happen is that some [computers] may get confused over the date". This attempt to dispel myths was for the most part listened to, although it faced hyperbole from the press and from government officials like Margaret Beckett – then the minister in charge of millennium preparations – who said the bug had the capacity to "wreak havoc".
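
For the technically curious, here is a small, purely illustrative Python sketch of the kind of two-digit date arithmetic that caused all the fuss. The function names and the ‘pivot year’ windowing fix shown are our own additions for illustration, not taken from any particular system.

```python
# A hypothetical illustration of the classic Y2K mistake: storing only the
# last two digits of the year and assuming the century is always 1900.

def legacy_age(birth_yy, current_yy):
    """Age calculation the way many pre-2000 systems did it."""
    return current_yy - birth_yy

print(legacy_age(65, 99))   # in 1999 this still works: 34

# On 1 January 2000 the clock rolls over to '00' and the sums go wrong:
print(legacy_age(65, 0))    # -65, when the real answer is 35

# One common remediation was 'windowing': pick a pivot year so that small
# two-digit values are read as 20xx rather than 19xx.
def windowed_age(birth_yy, current_yy, pivot=50):
    def expand(yy):
        return 2000 + yy if yy < pivot else 1900 + yy
    return expand(current_yy) - expand(birth_yy)

print(windowed_age(65, 0))  # 35
```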

In one instance a woman moved her family to a remote house in Scotland to live off water from a well convinced it was the end of the world. It wasn’t.

Google became Skynet

Back in 2014 Google acquired DeepMind, the British artificial intelligence company specialising in machine learning, advanced algorithms and systems neuroscience, for around $500 million.

Part of the acquisition deal saw Google establish an artificial intelligence ethics board to ensure the technology wasn’t abused. Combine this with the series of robotics firms purchased in 2013 – including Boston Dynamics – and speculation was rife that Google had become the monolithic Skynet from the Terminator films.


The Mayan Calendar

Back in 2012, the conclusion of the 5,125-year ‘Long Count’ Mayan calendar saw panic purchases of candles and other survival essentials. This was in anticipation of an Armageddon supposedly brought about by the mythical planet Nibiru (or planet X), a comet crashing into earth or a giant solar storm annihilating the planet.

While believers worldwide ran to buy supplies, the world didn’t end, despite reports of the ancient Mayan prophecy. Given the planet has existed just fine for 4 billion years, the majority of people overlooked the predicted doomsday; still, that didn’t stop a vocal minority and parts of the media from reporting the internet hoax of Planet X.

Five Skills IT Contractors Need


Contractors are required to meet growing digital demand. We’ve highlighted just some of the core skills that IT contractors should add to their arsenal to stand out against the competition.

No matter which sector you operate in, technology is becoming increasingly important. To keep up to date, contractors are required to meet growing digital demand. We’ve highlighted just some of the core skills that IT contractors should add to their arsenal to stand out against the competition.

Master Multiple Mobile Platforms 

Mobile has positioned itself as one of the most desirable skills for contractors in the IT market. The time for struggling with application development has passed, as the mobile device market continues its steady ascent. Mobile now leads the way in how we do business, consume and live our lives. To move up the ranks, learn Apple’s programming language Swift in addition to staples like Java, C++, C and Objective-C.

Free Skills Upgrade

While plenty of roll-outs in the next few years will require the most up-to-date skills, there are still a number of installations using earlier versions of SQL, Exchange and Oracle, amongst many others. Plan your work accordingly; don’t be out of pocket for the sake of a roll-out which has been put on hold. Make sure to add value to commodity skills where it can be done for free: if you’re a .NET developer, teach yourself LINQ using Microsoft’s own resources. Similarly, Java experts can learn Struts or Hibernate online for free.

Security is Big Business

Cyber security contracts are becoming increasingly commonplace, making security a valuable part of the IT skillset. Already one of the fastest-growing and highest-paid areas in IT, demand is only going to increase as time goes on. While hacking and cyber-attacks continue to dominate the news, security skills at an international level will become ever more important for warding off threats.


.Net is still Microsoft heavy

Microsoft’s .NET software framework is as crucial as ever. With more than 80% of the desktop market share, Windows still leads the way over Apple’s Mac OS X operating system, which is why writing code for machines running Windows is still pivotal for contractors.

ERP – SAP

SAP roles are in increasing demand: with the introduction of HANA and more projects being rolled out, project, development and testing skills are critical. As major companies invest in ERP, there are numerous contract and permanent roles globally.
 

Five Ways We Help Our Recruiters Recruit


Talent Manager Katie Matthews discusses five ways we help our recruiters recruit...

We know that recruiting is tough sometimes; that's why at Hanson Regan we fully support you every step of the way. Here are just five ways we help our recruiters recruit...


Hanson Regan included in the Top 500


Listing the details of the UK's largest 500 recruitment companies, The Top 500 is a unique and invaluable insight into the major companies in the UK recruitment industry...


The RI Top 500 Report is recognised as being the 'bible' for our industry, focusing on the overall UK recruitment industry as well as a snapshot on the global position for recruiters.

Hanson Regan are delighted to have placed at number 325, and this is the second year in a row we have been included in the list.

 

Hanson Regan Named National Champion in the European Business Awards 2015/16


We've been named as a National Champion for the UK in The European Business Awards sponsored by RSM; a prestigious competition supported by business leaders, academics, media and political representatives from across Europe.

The European Business Awards, now in its 9th year, engaged with over 32,000 businesses from 33 European countries this year, and 678 companies from across Europe have today been named National Champions, going through to the second phase of the competition.

Hanson Regan are recruitment experts in cloud ERP, SAP IT and associated technologies. SAP is at the centre of today’s technological revolution and we improve businesses by placing SAP experts globally.

John Kelly, Director, said: “We’re very proud to be selected to represent the UK as a National Champion. The European Business Awards is widely recognised as the showcase for Europe’s most dynamic companies and we are now looking forward to the next round of the judging process where we can explain in more depth how we are achieving business success.”

Adrian Tripp, CEO of the European Business Awards said: “Congratulations to Hanson Regan and all the companies that have been selected to represent their country as National Champions, they play an important part in creating a stronger business community.”

The next round requires the National Champions to make a presentation video, telling their unique story and explaining their business success. The judges will view all of the National Champions’ videos, and award the best of this group the coveted ‘Ruban d’Honneur’ status.  Ruban d’Honneur recipients will then go on to be part of the grand final in 2016.

Separately, the National Champion videos will be made public on the European Business Awards website www.businessawardseurope.com as part of a two stage public vote, which will decide the ‘National Public Champions’ for each country.  Last year over 170,000 votes were cast as companies from across Europe were publicly supported by their clients, staff and peers, as well as the general public.

Supported since its inception by lead sponsor and promoter RSM, the seventh largest audit, tax and advisory network worldwide with a major presence across Europe, the European Business Awards was set up to support the development of a stronger and more successful business community throughout Europe.

In the 2014/15 competition, all EU member markets were represented plus Turkey, Norway, Switzerland, Serbia, Croatia and the Former Yugoslav Republic of Macedonia. Their combined revenue exceeded €1.5 trillion; together they generated profits of over €60 billion and employed over 2.5 million people.

For further information about the winners, the European Business Awards and RSM please go to www.businessawardseurope.com or www.rsmi.com

For further press information, please contact:

Faye Lewis, Marketing Manager at Hanson Regan on +44 0208 490 4656

EBA: The Newsroom at the European Business Awards on +44 (0) 796 6666 657

Or email vanessa.wood@businessawardseurope.com

SAP Events


Hanson Regan curate the best SAP events until the end of the year. Which ones will you be attending? 

We know the SAP event calendar is busy year-round, and that is why this slide deck is a must-see for SAP experts.

While this is by no means a comprehensive list of all the events, conferences, webinars and awards ceremonies, we've handpicked a few of our favourites.

We're also silver sponsors of the UKISUG15 conference taking place at the ICC in Birmingham, 22-24 November.

Come and say hello!

The 007 approach to recruitment


The hiring of spies requires the same core elements at the cornerstone of any professional recruitment agency.

On the surface, the term ‘recruitment’ may not leap out of the pages of an Ian Fleming novel. Recruiters are (thankfully) not granted a license to kill and, operating within a legal system, their agents are unlikely to risk death at the hands of an enemy.

But on closer inspection, the hiring of spies requires the same core elements found at the cornerstone of any professional recruitment agency: good information and careful, successful planning.

An OSS guide to recruiting secret service officers considers the different types of spies needed – insiders, specialists, couriers, potential agents – and concentrates on numerous skill sets. Putting quality first is paramount: understanding a budding agent before approaching them – getting a sense of their interests, character, weaknesses, politics and what they’re about – is key to ensuring a candidate is the right fit for the role you are trying to fill.

James Bond, for example, has over the course of his career slept with an estimated 55 women, killed 370 people and drunk an average of 92 units of booze a week (four times the recommended weekly allowance) – all of which would set off alarm bells for most recruiters and hardly makes him the ideal candidate for most jobs.

In fiction, having a booze hound with a penchant for the female sex is fine, but in reality, Bond would probably jeopardise the reputation of any organisation he was representing.  

Out of the realms of fiction, many secret service officers would probably admit to joining with the intention of recruiting spies to gain access to secrets and conduct covert action. It’s a job description that has remained largely unchanged since the 40s – in today’s world, however, recruiters in all fields are acting as spies to some degree, and they need to take a meticulous approach to sourcing people into job roles.

When it comes to advertising client roles online in the public sphere, clandestine precautions need to be taken by recruitment agencies. Often, covering your tracks so as not to ‘warm up the talent pool’ when your sales team is unable to cope with the influx of leads is a necessity.

Similarly, withholding salary information, or specific locations that would enable competitors to work out who your client is, is crucial to retaining autonomous contracts. While some people may call this a paranoid approach, it could be the difference between making a placement and losing a candidate.

Recruiters today are expected to fulfil a number of responsibilities, and at the heart of it is understanding the human motivations that influence people. This means quickly and accurately knowing a candidate’s motivation for change (money, career progression etc.), why they want to find or leave a job, a realistic time frame in which to be placed and their preferred working location – and any number of other factors.

In addition, the recruitment world is all-immersive: working as an agent requires plunging headlong into attracting top talent, locked in head-to-head competition with rivals all trying to source that elusive ‘perfect’ candidate before you. Some people thrive in this environment and are neither shaken nor stirred if they lose a placement. Others drop off and change careers, finding the all-or-nothing approach too much to contend with long term.

While both industries cite money as a primary reason for joining, the ups and downs of recruitment do require an unbreakable spirit at times – however, as with Bond, an agent with a commitment to achieving a goal is a powerful weapon.

Thinking of becoming a SAP contractor?


For a lot of people, the term 'contract employment' can be tantamount to a mini panic attack.

For a lot of people, the term 'contract employment' can be tantamount to a mini panic attack. Traditionally, contract work was perceived as being forced into redundancy - or at the very least it created uncertainty over long-term employment.

However, as working focus shifts and people strive to achieve personal objectives alongside feeling successful in their jobs - two key factors to both business and individual success - contract work is now becoming the preferred working style for many. It's also an option that can be visited no matter what stage of your career you are at.    

The question: “Do I work as a contractor or in permanent employment?” is always valid and there are numerous pros and cons for entering into the world of contract employment. Frequently the question can form the basis of a tax strategy, or it could be autonomy – do you really want to work for someone else?

In IT, contract employment is rife – not just due to the nature of the work but also the amount of money which can be made from rejecting the permanent employee model. And SAP is an area in which contractors can thrive. 

The overriding factor for remaining permanent is the issue of job and income security – but in the world of SAP there is growing potential for contracts to become long-term. This is especially true on large-scale implementation projects where your contract lasts for a number of years. SAP in particular is an industry in which contract extensions are commonplace.

One of the major draws of contract work is that traditionally IT contractors are paid much more than permanent employees. This is partly due to compensation for the uncertainty of being independent and not receiving holiday or sick pay. However it can also offer a flexibility that isn’t prevalent with permanent employment.

SAP is ideal for contractors due to the nature of the work, in which an entire system needs to be implemented. Throughout this time there is an all-hands-on-deck approach until the assignment is complete. Once this happens, however, the SAP professional headcount drops off as their skills and expertise have been utilised. This is why contract work is so appealing for businesses, which are often reluctant to spend money hiring and investing in permanent staff only for them to leave once a project is complete.

For those who don’t like company hierarchy – contract work affords financial rewards based on technical skill alone, eliminating the need to spend years in a company ascending the management food chain to earn more money and advance career prospects.  

Other benefits include the opportunity to travel – this is common across SAP and IT, but also other fields such as Logistics or Purchasing and Procurement, where contract work can take you as far (or as close to home) as you wish. Large management consultancies are based all over the world with people operating across Asia, Europe, America and Africa. 

Retaining the autonomy of acting as your own boss as an independent contractor means you have the flexibility of managing your time – within reason – and so can take time off or go on holidays. Many contractors who have holiday plans can also opt for shorter-term assignments as a ‘fill in’ for a shorter duration. 

If you’re looking to advance your career, or take it in a different direction with a great potential to earn, asking that all important question “do I work as a contractor or in permanent employment?” should always be at the front of your mind.  

Top films for tech geeks


Hanson Regan curate the best films for tech geeks. 

Hanson Regan has polled our collective results and the loser is... Star Wars, convincingly ousted by Metropolis, The Matrix, Star Trek and even The Social Network. Ouch.

Click HERE to see who else made the cut. 

Keep calm and carry on: how to handle a project that de-rails


How to deal with short-term crises, calmly.

At Hanson Regan, building relationships and alliances with partners and suppliers is at the heart of what we do. 

We know that clients and candidates expect high-level, consistent service, and this is something we strive to offer all of the time. However, we understand that projects occasionally de-rail and complications sometimes take place – while this is, thankfully, a rare occurrence, it is something that comes with the job, and knowing how to handle small crises is critical. What matters is how, as a business, you react.

As Global SAP specialists, our candidates engage with us in two ways. The first is when they’re reaching the end of a fixed-term contract and are looking for their next opportunity. The second is when they are in a situation which requires our immediate attention and help.

Many SAP IT companies are not on hand to deal with this second scenario, but all that matters in this instance is support. We know that when we reach out in crisis situations, our relationships, collaborations and trust strengthen as a result. 

To do this, we need to know as much about the company as possible. Prior to any contractor starting, we always ensure we know what the wider business objectives are and the context in which our candidate will be operating. We also need to understand the company’s difficulties and challenges: what pressures are impacting the business, and how will the market evolve?

Having this knowledge from the very start builds confidence with senior decision makers who in turn, trust that we understand the industry and the challenges they face. In SAP, emphasis is on SAP solutions and modules as well as an in-depth understanding of vertical markets. 

Here are just some of the ways we develop these business relationships: 

The key at the start of a project’s lifespan is to get up and running as quickly and effortlessly as possible. It is crucial in these first few weeks to get the project running to deliver the ROI – and, if a situation arises that will prevent this from happening, there is an even greater demand for quick resolution.

Knowing the customer and market inside out, and having a consultative, integrated approach to building the relationship with the client, helps with both the long-term growth necessary for strategic change and the short-term crises that can arise.

IT strategy is often at the mercy of monetary constraints, and this means that strategy towards partners and suppliers can vary. One thing that will remain the same is that IT decision makers will always opt for suppliers who provide a transparent and efficient service. This is especially true with crisis resolution – time held up in procurement is wasted and expensive.

Maintaining open and transparent relationships is key for the duration of a contract. To do this, having a clear and concise contract that outlines deliverables, a dedicated accounts team who pay on time and a support network that embraces technology goes a long way towards smooth operations.

Both vendors and candidates will prefer a model based around transparency and openness and even the most complicated of issues require this approach. In the long-term this will cement working relationships.

We have found that taking a customer-centric approach means suppliers can deliver focused, transparent solutions while vendors are fully equipped to react to short-term crises. This has the added benefit of delivering a sustained, successful, long-term deployment.

Five Technological Flops


Innovation grows out of experimentation. But there are only so many times you can reinvent the wheel before things go very wrong...

We all know the saying that innovation grows out of experimentation. But there are only so many times you can reinvent the wheel before things go very wrong.
In a constantly evolving world - where tech reigns supreme - new designs, features and innovative ideas created to hook potential customers have given rise to some excellent technology. They have also seen the painful demise of some truly terrible technology.

Here are five technological flops over the last few years.

Kyocera’s Echo

Generally considered ‘the worst smartphone ever’, Kyocera’s Echo – originally described as both ‘magical’ and ‘exciting’, as if it were marketed by the team behind the Harry Potter franchise – arrived and was utterly underwhelming. The concept was good: a dual screen was, in theory, an ingenious way to switch between tasks. But unfortunately a terrible battery life, phone freezes and a litany of problems for users meant that this phone died a death. If that wasn’t bad enough, nobody mourned its demise. Ouch.

iTOi iPad Periscope

Unless you’re on a submarine you don’t have any use for a periscope, which is why the iTOi iPad Periscope was a confusing concept – who actually wants to be further away from a camera when engaged in a webinar or Skype chat? For a mere $150 you can suffer the indignity of having a huge periscope planted on your desk next to your ‘in and out’ tray, leading your colleagues to believe you have some sort of U-571 fetish. Worse still, the monstrosity offers up nothing more than making the front-facing camera on your iPad more flattering…

Google Glass

Google Glass will have its champions and we acknowledge that there are real benefits to the glasses. For example, in the medical industries, ambulance drivers are privy to set locations, nearby colleagues and reduced reaction times to accidents and emergencies. Clearly a good thing. But that’s not why we hate Google Glass. We hate it because in a Metaverse, dystopian Neal Stephenson sort-of-way, Google Glass is creepy. You can take photos of people, access personal information and you look like an idiot wearing them. Get a real life.

ZD Electric Car

There are arguments for electric cars and we're all for green power, but some of these models are truly dreadful. Cue the ZD: the Smart car was a good starting point for further developments, except a lot of what has followed has actually been, er, worse. Failing to fire on any cylinder, the ZD is one of the more substandard variants of the electric car model.

R-Zone

Back in 1995, waaaaay before Google Glass, the R-Zone attempted to give the illusion of virtual reality. Sadly, its cutting-edge functionality went as far as shining a light through a transparent LCD cartridge and projecting the action onto a ‘screen’ in front of your eye. This was a terrible, terrible handheld and a knock-off of the Virtual Boy. All it achieved was making 90s kids feel miserable and more desperate to get a Game Boy.

Small Businesses and HR


The core principle of the concept of human capital management is that your employees are assets to the business, an entity whose value can be measured in a tangible way…

HR: The Things Start-up Founders Need to Know

The core principle of the concept of human capital management is that your employees are assets to the business, an entity whose value can be measured in a tangible way. Crucially, the future value of your employees to the business can be enhanced through the growth, investment in and nurturing of them. People are undoubtedly your greatest asset and any organisation, small or large, that supports this core approach to human capital management will by proxy provide employees with clearly defined and communicated performance expectations, something that is aided by investing in a sound piece of HR software to ease the pressure of managing your staff.

Ask any start-up founder, or any business leader for that matter, what represents their greatest corporate challenge and it’s highly likely that managing human resources comes pretty high up on the list. For new ventures in particular, where leaders are unlikely to have any real experience of issues relating to HR, a little bit of preparation can be the difference between creating a culture of real success, or getting bogged down with problems related to your people when small mistakes could have a disastrous effect on your overall business plan.

Simply recognising HR challenges when they arise isn't good enough; it's important to understand the issues at hand and prioritise them: there are just too many to tackle alone. Time is your second greatest asset in the business and it's crucial that you value it that way and know how to spend it, so finding a way to effectively time manage your staff is important. Remember that before you've even hired anyone you need to work out how to raise the capital to employ people in the first place, so that's one obstacle to get over before you've even begun.

There are several strategic HR issues that you should put at the very core of your start-up's strategy.

Creating the right cultural fit

To build a truly successful business, you need to understand the needs, interests and most importantly personas of your customers. Key sections of your client-facing staff such as marketing, customer support, sales and account management professionals should be recruited to align as perfectly as possible with your customer base. It's important therefore to hire from a rich and diverse background of workers, so that the multi-faceted nature of client management and communication can be covered effectively. A good corporate culture should value debate, differing voices and perspectives. Most importantly, ensure that you're getting everyone round the table and talking consistently, to iron out any creases and ensure that you can all reach a successful compromise over complex issues.

Transparency

As a business owner you want to keep your staff informed about the direction in which things are heading, allow members of staff to gain access to training materials as and when they require and also to access the information that will help them to work effectively. This is where HR software comes into play. Many HR system providers offer simple cloud platforms that all staff can have access to for the above criteria; the key is being able to combine the centralising of information, organisation of key documents and above all improve performance.

The timing of when you share certain pieces of information with staff is also important, as by sharing it too early or too late you can undermine confidence, breed over-expectation or even convey information that isn't entirely accurate. Employees want clarity from their leaders, so don't share simply for the sake of sharing. In order to properly leverage transparency from an HR perspective you need to ensure that employees have the right amount of the right information at the right time, so that they may use it to improve both themselves and your organisation.

Instil a culture that asks questions

When employees have the skills, knowledge and training required to make decisions on their own, it removes a great deal of the burden of HR management from you.

You need to instil a culture whereby employees question why they are doing what they are doing: why they need this particular piece of training, why the company is headed where it is, why they are employed there, what it is they bring to the organisation and how they are contributing towards its growth. If you can build a business that can explain itself and the future it wants to create, with in-depth analysis of plans and the roadmaps of the individuals within it, then it can truly take on the substance of success.

Generation Y are coming – are you ready?


The workplace as we know it is in flux. The Millennial generation – those born after 1980 – are slowly infiltrating the world of work, and they exhibit a different set of professional values from those of their Generation X predecessors…

The workplace as we know it is in flux. The Millennial generation – those born after 1980 – are slowly infiltrating the world of work, and they exhibit a different set of professional values from those of their Generation X predecessors. While the last decade has seen millennials trickle into the workforce as first jobbers or graduates and work up to executive level, it is only now that they are assuming managerial levels of work.

By 2020 it’s estimated that 40% of the total working population will be millennials. This means that understanding what drives them and what they want is key for retention and employee satisfaction. There are a number of myths surrounding Generation Y and what they expect from their careers. Firstly, the myth that they reject the corporate world – this simply isn’t true. Young people will weigh up their options and take an active interest in launching a start-up, or their own company, if there isn’t a viable employment opportunity elsewhere. That’s because millennials want a job that will accommodate their values. While they are confident and strive for success, they are much more likely to express themselves and be happy in an environment which allows them to feel comfortable rather than one where they must adhere to protocol.

The defining factor here is that millennials are a relationship-orientated generation that needs mutual respect; if this is given, they will remain loyal to the company. The opposite applies if they do not feel valued: growing up through a number of economic crises has given them the chance to think about what matters. If they are unhappy in a job – here meaning they feel unfulfilled or unable to reach their potential – they will leave.

Another key factor is a strong need for work-life integration, with the majority of young people wanting flexible working hours so they can fit in their commitments outside work, whether that is their children or personal pursuits. The overriding characteristic is that Generation Y want to do something that has meaning, where they feel they are making a difference. Understanding and accommodating these requirements is crucial for any business moving forwards. If companies want loyalty, they need to offer leadership and growth opportunities within the organisation while making room for personal values.

Stop Feeling Overworked!

When it comes to feeling stressed, humans are funny old things. We have accepted, ever since clocks were introduced to monitor labour in the 18th century, that time equals money. Of course this is perception – and what is more baffling is that we know it leads to an unhealthy, dogged need to be perceived as busy all of the time.

This fear of wasting time and the manufactured sense of urgency do little to actually encourage productive working. We also know that this attitude breeds stress, unhappiness and tiredness, and erodes our sense of free will.

Quite the paradox, isn’t it?

Three centuries down the line and apparently some things never change. Thanks to the creation of the internet we’re available – around the clock – all of the time. Contactable by email, mobile, WhatsApp, FaceTime, Skype, Snapchat (and a plethora of other means).

Of course this isn’t a natural state and it certainly isn’t necessary – in fact, the longer you live in this constant state of panic, the harder it becomes to achieve even the simplest of tasks. The solution lies in preserving your right to ‘down time’ and working in a way that suits you.

If you work in an office where you’re scared of what people will think if you leave on time, yours is an office where very little is actually done. Sadly this environment breeds stress and unhappiness – so, how do you combat it?

Get organised – Each day, make a list of five achievable tasks you want to complete. Rank them in order of priority, focusing on business-critical tasks first and non-essential items last. This helps you to determine what is most important.

Know your work cycle – People have different levels of productivity throughout the day. If you are someone who works well through the morning but tires by three, prioritise your tasks so the most important thing gets done first. Similarly, if your energy levels drop, get up, take a walk, eat some fruit and drink water to raise your blood sugar levels.

Not everything is important – Learn to say no to people – if you’re in a working environment where everything is all-go all the time, the output will be poor and this attitude to work isn't sustainable long-term. Break things down into manageable and achievable amounts and don’t worry about not completing them straight away.

Don’t over-think – So much time is wasted worrying about tasks in hand. Sit down and just concentrate on doing one task for an hour at a time. Once you forget about the other things and focus for an extended period of time you will feel more productive and will start to see how much you have achieved.

Stop getting distracted – It’s easy to be distracted in an office, so try not to chat about the weekend during your peak productive hours. Break the morning in half with a five-minute rest and spend a few minutes talking to colleagues, then repeat this in the afternoon. It’s essential that you take time out throughout the day, but try not to take breaks when you're in full flow and at your most productive.

Drink lots of water – One of the most important ways of remaining alert is to stay hydrated. Not only is dehydration bad for your health, it can also affect your mood – and irritability in the office is not great. Cut out the energy and fizzy drinks: sugar is not only linked to obesity and diabetes but can also cause adrenalin surges that can’t be sustained throughout the day.

Don’t be afraid to ask for help – There is no shame in asking for things to be clarified or explained to you. Worse is to embark on a task only to have to do it again. If you’re overworked, delegate – share the load and make life simpler for yourself.

To err is human – People make mistakes; don’t beat yourself up about it. What matters is how you react: apologise, hold your hands up and, most importantly, don’t let it happen again.

Embrace your strengths and weaknesses – Understanding the areas where you struggle means you can plan around them. Similarly, play to your strengths and shout about your successes. If you've done something of value, share it with the wider business.

Eat well and get lots of sleep – When stressed, a lot of people feel the need to skip meals; this is the worst possible thing you can do. Eat a healthy, balanced diet, do some exercise and sleep well. You’ll be surprised at how much better you feel in the long term.
