
The Hanson Regan Blog

What Does it Take to be Successful?

In partnership with Goldman Sachs, we're putting the spotlight on professionals who embody a personal mission to make things possible. Below, we talk to the Vault Team about their career journeys and what it takes to be successful.

Think back to the very first time you built something when you were a kid.

Not the house of cards that would inevitably tumble, but the very first model airplane you built yourself, or the dollhouse you created from scratch.

There’s an unbridled joy that comes with seeing something through and completing it. Often, it’s something we seek out in our careers because we want that sense of fulfillment in working towards and achieving our goals.

We chatted with a few members of Goldman Sachs’s Vault team about just that—creating things from scratch and the types of environments that promote that kind of achievement.
 

Build Something From the Ground Up

Sarvpreet Kohli, Vice President in the Consumer Banking Division of Goldman Sachs, is a scrum master for the Agile Delivery Team that works on Vault.

Vault is a microservice-based, real-time trade evaluation system at Goldman Sachs. It’s used to evaluate the company’s regulatory infrastructure and to protect the bank from any trade that would violate either regulations or the bank’s operational procedures. Basically, Vault manages risk across various institutional businesses (which include corporations, financial institutions, investment funds, governments, and individuals).

While Sarvpreet has worked at Goldman Sachs for just two years, he has 20 years of experience when it comes to application development. Sarvpreet was initially drawn to the start-up-like culture of the Vault team and the opportunity to help build something from the ground up.

“It’s the vibe around the floor,” Sarvpreet says. “People are really energetic, helping each other out. It doesn’t take weeks to get a decision going. Once we decide something, then we build. People are swarming and getting things done.”

The Takeaway

Have an “all hands on deck” attitude. If you’re looking to build something from the ground up, it’s going to take a lot of time and effort, so find a group of team players.
 

Work Together

A communal atmosphere wasn’t something Sarvpreet had experienced at other companies—they might have held a quarterly town hall, but initiatives would be communicated from the top down. Sure, Sarvpreet knew what was happening, but he still felt disconnected from the mission. Because so many of Goldman Sachs’s working parts touch other departments, whether it’s design or development, relationships are established with many of the stakeholders, and the group’s mission has to be firmly embraced by everyone.

“You know what the larger initiatives are and why we are doing this,” he says. “So, you have a clear purpose and know how your role contributes to that.”

In a lot of ways, it’s the perfect environment for those who are focused on STEM-related careers.

“STEM is in the room with business, which is something I think you really want to look for in a STEM career,” says Arieh Listowsky, the Tech Team Lead for Goldman Sachs’s Bank Controls Engineering Team.

“You’re actually part of what the core business is wherever you’re working. If your role is viewed as secondary, chances are, you’re not going to have as much pride or satisfaction in your work.” And just like Sarvpreet, all members of the team know that they not only have an integral part to play in the success of the product, but also that they’re not alone in their work. You may not be able to leave the office at 5 PM every day if there’s a problem that needs to be solved, but Arieh knows he won’t be solving it alone.

“There’s never the put your head down on the table, I’m all alone, I’m done moment,” he says. “There’s always somebody there who’s going to help you out and find the piece of expertise that you don’t have.”

“Really, with the whole team together, you can actually do all those things that you felt were not doable,” Arieh says.

The Takeaway

Learn how to work together toward a common goal. Think about how what you’re doing not only impacts your co-workers, but the team and business as a whole.
 

Take it Upon Yourself

Aditi Kumar, a Development Tech Lead for Goldman Sachs, remembers feeling very intimidated during those first few weeks on the job just a few years ago.

“During the last three years,” she says, “I’ve discovered that there’s this culture of mentorship where people are actually invested in not just you as a person, but also your career. To help you succeed at your own goals, and also to push you out of your comfort zone and help you discover areas that you haven’t necessarily conquered yet.”

One project, in particular, stood out because Aditi had the opportunity to take on more responsibility and spearhead work in a new area. Aditi took it upon herself to step up and start making design decisions that she wasn’t necessarily comfortable making.

“I brought that up with a couple of my mentors. And they were like, 'If you fail, you fail, but at least that’s a learning experience, and you should absolutely go ahead and do it.' So I think if I hadn’t got that bit of direction or encouragement from my mentors at the time, I would’ve kept in my shell.”

The Takeaway

Find a mentor, but learn to push yourself and be your own best advocate. Mentors will be there to help you along the way, but learning how to stand on your own is invaluable.
 

Stay Relevant

But even beyond mentorship, if you’re really trying to lock down a career that has a heavy emphasis on STEM, you have to stay ahead of the curve, technology-wise.

“You need to know what’s going on in the world, but also think about how it applies to you personally, and also in your career,” says Donel D’Souza, Senior Software Developer for the Bank Controls Engineering Team. Like Aditi, he’s only been with Goldman for a handful of years, but he understands that in order to stay at the top of your game, you need to learn about all of the emerging tools and open source technologies that are at your fingertips.

“Make sure you keep yourself updated on the technology that’s out there in a way that is going to be impactful.”

Source: The Muse

If you’re interested in a career change, call us on +44 0208 290 4656

or drop us an email at info@hansonregan.com

New Manager: How to Avoid These 7 Pitfalls

Being a new manager can be both exciting and daunting at the same time. You want to get off to a good start but you don’t know what you don’t know. Naturally, you bring pre-conceptions with you. Yet, the reality can be a whole different ballgame. What do you need to look out for as a new manager so you don’t get off on the wrong foot and lose confidence?

Some people find themselves accidental managers. Without seeking the opportunity, they are given people management responsibilities on top of their technical or functional roles. Others seek management roles out of choice, often as a way of advancing. As a coach working with new managers early in their careers, I see the following issues come up time and again.

Over-controlling

There is something about putting a uniform on that changes how people behave. For example, I used to work with police officers in a research department who wore suits while posted to my team. Yet as soon as they put their uniforms back on, they would take on a different air of authority and power. Job titles can have a similar effect on people. Add the title of Manager and some people adopt an ‘I’m in charge so you have to do as I say’ approach in the mistaken belief that it will command respect. Over-controlling comes from insecurity.

Instead, the role of today’s manager is to be an enabler, so your direct reports or team can do their jobs effectively. You gain respect from the trust you build and the motivating conditions you create, not from your job title.

Stuck in the team

It’s not uncommon to be promoted from within a team to manage your former peers. Suddenly, they see you differently, or you might see them differently. Other team members may have missed out on that promotion and be unhappy. Inevitably, your relationship with the team is going to change. The danger is never really leaving it because you don’t want to upset people. It’s tough being tough with your former teammates, especially about performance issues.

The trick is to maintain and work with the trust and rapport you already have with former teammates. It doesn’t mean losing a friendship. At the same time, set the tone for what is now different about your relationship with them. What do you expect of them? What can they expect from you? Let them know what’s important to you and the principles you stand for as a manager. Then show it in practice. Start how you mean to go on and you will soon gain that respect from the team.

Making assumptions

As a new manager, it’s tempting to think that everything is OK in the absence of feedback: if there’s a problem, they’ll come to me, or they must be happy. Never assume anything. People hold back for all sorts of reasons, including fear, lack of awareness, and lack of confidence.

Your role is to engage with direct reports in an ongoing dialogue that tests assumptions. Ask how things are going in a regular 1-1. What is working well and not so well? What is getting in the way of achieving their objectives? How will they resolve that problem or improve the way they work? What support do they need from you? Remember, the pastoral side of a manager’s job is just as important as the business side.

Making comparisons

One of the ways we learn about managing is through our experience of being managed. And that can be good, bad, or indifferent. You can pick up unhelpful habits from poor managers if that’s all you have known. Or you may feel inadequate in comparison with your brilliant manager.

Learning from others can help you avoid poor habits and gain ideas for good ones. However, your aim should be to stamp your own personality on the way you manage and lead. No two managers are alike, just as no two direct reports or team members are alike. So your management style depends on you, the other person, and the situation. Self-awareness, empathy, adaptability, and flexibility are your friends.

Limiting your learning

“Work is the learning and learning is the work.” – Harold Jarche

If you think you can learn how to be a manager solely out of a book, on YouTube or in a classroom, think again. On-the-job experience is the main way to learn. That means building on successes, making mistakes, testing, reflecting, and trying again with hindsight and insight. Coaching and mentoring help that process.

The need to know

An insecurity new managers often feel is the need to know (or be seen to know) as much as the people they manage, especially when they bring technical or functional expertise with them. It can feel uncomfortable when your direct reports know more than you do about a specific topic or about how to do something. The temptation is to retreat to what you like doing, to interfere, or to do your team member’s job for them. The effect is to disempower and demotivate.

Develop your expertise in managing. Get your hands dirty by exception, when the workload requires it. Aim for enough depth and breadth in your team’s subject area. Let your team breathe.

Unclear about boundaries

Managing is not always a black and white activity. Yes, you can look up policies, processes, and procedures in the staff handbook. But managing is also about making judgements in grey areas where there isn’t a rulebook. Again, experience is a great teacher. However, there will be times as a new manager when it’s appropriate to escalate things higher or to involve your manager.

Agree on the responsibilities and accountabilities of your managerial role with your manager. Discuss the limits of your authority – what you can and can’t do. Identify the thresholds for raising issues, for example, in relation to quality, capacity, and performance.

Source: Learn to Leap

If you’re interested in a career change, call us on +44 0208 290 4656

or drop us an email at info@hansonregan.com

How to Hold a Brilliant One-to-One with Your Manager

Conversations about your performance are a given if you are an employee. You are hired to do a job and your employer wants assurance they made the right decision. They want you to succeed and grow in return for your valuable contribution. However, the quality, frequency, and format of performance conversations vary widely in the workplace. At the core is the relationship with your line manager. So it pays to give proper attention to your one-to-one. Here are some tips.

It’s a partnership

The relationship between direct report and manager is changing (slowly) from a parental one to more of a partnership. The shift is about control – less micro-management, more autonomy, and empowerment. Today, effective managers are enablers, helping people to realise their potential, use their talents, and grow. Not purely for altruistic reasons, but also to deliver for the team, the section, the business.

Perhaps the title and label of ‘manager’ need a rethink. Managers are not always popular, thanks to perceptions that they constrain people, to poor skills, and to self-interest driven by pressure from above. Ideally, the relationship works best when the roles of direct report and manager are complementary. For example, a software engineer will know the ins and outs of a particular piece of software. Often, a manager will be less familiar because they don’t need to work with it every day. However, they will have other knowledge, skills, and experience that help the software engineer to achieve (context, strategy, coaching, client relationships, etc.). The learning and mutual support are two-way.

Why have a one-to-one?

People want to be valued and to make a difference through their contribution. Having a conversation once or twice a year as part of a formal process does not meet those wants. Also, it makes no sense for an employer who needs performance to be agile in response to change. Regular one-to-one performance conversations help you to stay on track and adjust if necessary. It’s a space to build and maintain a trusting relationship in a psychologically safe place. You hold yourself to account and are accountable to your manager, so you can do and be at your best.

Research shows that employees want six things from a frequent one-to-one:

  • Goal setting
  • Goal review
  • Performance feedback
  • Problem-solving
  • Soliciting support
  • Resolving problems with colleagues

As a manager, take a holistic view of the person both inside and outside work. Add wellbeing and professionalism to that list.

What’s it about?

This type of performance conversation is not about your day-to-day to-do list. It’s not the chat across the desk. Instead, for half an hour every 2 to 4 weeks, you are stepping back from the action to talk informally about how you are doing and feeling. Expect or encourage a coaching style from your manager. What’s on your mind? How well are you meeting your commitments? What is holding you back? How can we resolve issues or improve things in the short-term? Agree on a short, clear agenda and create actions together.

However, if your manager starts cancelling one-to-ones, it’s a sign that the meetings, and perhaps you, are not a priority. That will reduce your productivity, which is in no-one’s interests. Don’t accept interruptions either. Otherwise, it’s time for an assertive conversation. Here’s how not to shoot yourself in the foot.

The difficult bits

People are not difficult; their behaviour can be. Separate the person from what they do. As a manager, things always go better when you go into a performance conversation with a listening mindset and a positive intent. Check out these tips if you want to be more capable and confident holding tricky performance conversations.

As a direct report, the balancing act is between being assertive and being tactful. Speak up when facing any of these weasel words from a poor manager. How can you support your manager in a way that helps you both? You will need to manage your emotions before, during, and after the conversation. So, plan, rehearse and pause in the moment.

In summary, a regular conversation about your performance and growth is a healthy personal, professional, and business enabler. Ensure you get the most out of your one-to-one by applying the tips in this post!

 

Source: Learn to Leap

If you’re interested in a career change, call us on +44 0208 290 4656

or drop us an email at info@hansonregan.com

Lack Experience? How to Show Your Potential to Employers

You decide to go for a job you really, really want. Thankfully, you get through the initial hurdle of CV or online application. Now you worry that you lack experience at the level for which you are applying. That old friend, imposter syndrome, pays a visit chipping away at your self-confidence. Other people will have more experience. They will find me out in the interview. I’ll look a fool. Sound familiar? Here are three ways to get off the back foot and give it your best shot by showing your potential to employers.

Show how you learn

Remind yourself that no one is ever the finished article. Neuroscience is showing that our neural pathways do renew. We all have the potential for more in our lives. Moreover, you will have shown your potential in the past. Think of the times when you faced something new and, subsequently, were successful. Now reflect on how you went about it. Identify the obstacles, describe how you overcame them, what you did, and how you felt. What skills and behaviours did you use? What mindset helped? Also, show genuine humility. What did you struggle with and not achieve? Give reasons. What would you do differently now?

Employers want curious, adaptable, and resilient learners. So, show them your learnability and what they would be getting if they hire you.

Put forward your ideas

Sometimes, you will face questions at the job interview that throw you. Either you don’t expect them, you don’t have an answer, or they hit a vulnerable area where you lack experience. The trick is to turn the situation to your advantage. When it’s beyond your experience, say so. Don’t pretend, because they will see through you. Then talk about how you would deal with that situation, show your creativity through your ideas, or ask a perceptive question. The employer can then see how much support you might need and how large or small the gap is. The better your approach, the smaller the likely gap.

Employers want resourceful, can-do, and creative contributors. Don’t sit back; lean forward and reveal your emerging talents.

Be yourself with skill

Finally, how you present yourself can clinch the decision in your favour. The big mistake is to give answers that you think the employer wants to hear. Instead, be yourself with more skill. Your passion will shine through when you focus naturally on what you truly believe or feel strongly about. However, you do need to invest time reflecting so you can articulate your USP. Then back it up with evidence of the difference you make and apply it to the job opportunity in front of you.

Employers want energy, commitment while you’re with them, and self-confidence.

To summarise, entice them with your potential. Give them relevant evidence of what talents you have begun to show. Let them ‘see’ you in that role and their organisation. Paint a picture of how your talents can be turned into strengths for the benefit of the employer given the opportunity.

If you lack experience, don’t let it inhibit you. Reframe it as potential and let your presence be memorable!

Source: Learn to Leap

If you’re interested in a career change, call us on +44 0208 290 4656

or drop us an email at info@hansonregan.com

Introvert Issues: How do I promote myself?

Do you struggle as an introvert to promote yourself in today’s competitive and networked work environment? If so, this post offers five ways to help you build your confidence and be more at ease with the demands of an extroverted world.

The idea of self-promotion tends to be anathema to an introvert. Often, it frightens the hell out of someone because they perceive that it’s about pretending to be an extrovert. Some also see it as unattractive to put yourself forward (especially at the expense of others) because it smacks of bragging. They prefer to put others first or for the collective to shine.

I should know because that was me when I started out solo in business nine years ago. However, I have learned through experience and reflection on what works to promote myself without compromising integrity or changing personality. Here are five approaches that you might find helpful to adapt to your situation.

Building trust

Introverts prefer intimate one-to-one relationships to superficial acquaintances within groups. I like to get to know people before opening up about myself. When I know that we have mutual trust – integrity and capability are clear – then I’m comfortable enough to share in more depth. That’s a process that can take time, although occasionally you do meet someone you hit it off with straight away. Even if it’s love at first sight, don’t expect to get married on the first date!

Test the degree of trust so as to play to your natural desire for a small number of quality relationships. You will find it so much easier to articulate your worth with someone you have got to know, like, and respect, because they won’t be judgemental. Instead, they will encourage you and be a critical friend you can depend on to act in your best interests. For example, signposting you to helpful resources, making introductions in support of your job and career, or giving without expecting something back.

Aligning values

Why do we like working with some people more than others? Sometimes, it’s about the complementary nature of two personalities. Like jigsaw pieces that fit together seamlessly. In my case, I warm to people with similar personal values to mine.

Be aware of your values set and uncover those of the other person through observing behaviours, probing with questions, and listening as much to what is not being said as to what is shared on the surface.

Play to your introversion by ensuring your actions speak louder than your words. Be consistent with your values, insistent through focus, and persistent to achieve your goal.

Influencing, not selling

Introverts dislike selling themselves overtly, partly because they perceive ‘selling’ as pushing yourself as a ‘product’ on to an unwilling ‘customer’. Today, cold-calling and unsolicited sales approaches are unwelcome. Instead, people expect genuine engagement that establishes a positive relationship and gives them an enjoyable ‘experience’. This plays into your hands as an introvert, with your desire for intimacy and authenticity.

You can do the same through influencing how people see and value you. Ask yourself what you want to be valued for. It might be your expertise, your contribution, or your potential. How? Focus on pulling people towards you rather than pushing yourself on them. For example, by sharing useful information, knowledge, or insights with the people you want to build a relationship with, by taking an interest in their world, by showing empathy, and through curiosity about their challenges. Over time, invest in creating healthy social capital so your relationship becomes based on mutual benefit and goodwill.

Also, who are your raving fans, the people you know well who won’t think twice about advocating for you? Earlier in my career, I was at my best as a dependable, organised, and supportive Number Two to senior managers. I got things done quietly and effectively. However, outsiders couldn’t immediately see that. Fortunately, my managers recognised my strengths, trusted me, and championed me to others. In the digital age, a great way to be recognised and feel less self-conscious is to get and give recommendations on LinkedIn with people who can vouch for your talents and achievements.

Integrating your online and offline presence

As an introvert, I find the shift to the network era liberating, enabled by technology and leading to quality, more than skin-deep, relationships beyond the screen. Changes in technology mean we can connect with vast numbers of people in an instant. From one-to-one to one-to-many, we all have the potential to establish our credibility and influence in our own inimitable way in an array of formats.

Take a strategic as well as a tactical approach to strengthening your presence. What digital media and formats suit your message, skills, and personality at their best? Online builds relationships and credibility from one to many; offline cements the relationship from one to a few. That’s how you integrate the two while remaining consistent and true to your identity.

 

Being an introvert with skill

None of this is about changing your personality to try and become something that you are not. Don’t apologise for being an introvert or hide it. Respect your natural preferences and use them to your advantage. Flip other people’s perceptions and demands of you. Focus on what you do bring and share how they can get the best from you. Develop yourself by taking small steps to hone your skills and to adjust your mindset and behaviours. Finally, check out this helpful post by Sacha Chua on the shy connector.

Source: Learn to Leap

If you’re interested in a career change, call us on +44 0208 290 4656

or drop us an email at info@hansonregan.com

5 Things on Your Social Media That May be Damaging Your Employability

Over the past couple of decades, social media has exploded in popularity. With smartphones so readily available, it’s never been easier to be online and stay connected to your friends, family, and the content that interests you. However, given this growing culture of social media, a 2017 survey found that as many as 70% of employers vet potential candidates online before offering them an interview - a 10% increase from 2016. This is not surprising, as a US survey indicated that 80% of Americans were using social media in 2017. This means that your online presence is more important than ever and could be the difference in landing your dream job. Here are five things to avoid doing on your social media accounts that might deter employers from contacting you.

  1. Poor Spelling - this is easily avoidable and a good habit to get into in general. Employers will not be impressed, and are in fact more likely to laugh, if your Facebook status is riddled with spelling errors or contains unironic colloquialisms such as ‘mad ting’ or ‘sik bruv’. Keep it simple and well-spelt to give off intelligent signals.
  2. Deleting Social Media - many people, fearing that their social media accounts will be viewed and judged by employers, opt to delete them during a job search. However, given the direction employment is heading, it is important to show that you understand and can use social media effectively, especially when you consider some of the futuristic jobs on the horizon. Instead of deleting your social media account, have a sweep through and delete any questionable content, then like companies that you’d like to work for and join conversations about new opportunities advertised online - future employers will notice this!
  3. Offensive Content - what you have to bear in mind here is that what is offensive is subjective. The post you might share from Ladbible about someone getting so drunk they run into a wall or throw up over themselves might get a few giggles, but it wouldn’t be appropriate in a working environment. The content you decide to share says something about you as a potential employee, so be careful.
  4. Swearing - while we don’t live in Victorian times, and swearing is something you’re likely to come across every day, it is always better to keep language like this off social media. Choosing to post profanity rather than leaving it out or using a different word shows a lack of vocabulary and a lack of care about offending people.
  5. Negative Opinions - everyone is entitled to negative opinions and you’re free to post them, but bear in mind that rants about previous employers or colleagues are not going to be taken lightly by your future bosses. If you do feel the need to post a negative status or tweet, keep the previous points in mind, say it intelligently, and research the topic to show that you’re informed. Opinionated people aren’t necessarily looked down on by employers, as they are often free thinkers and good problem solvers, but a quality that must go with that is a cool head and the ability to discuss and take counter-arguments without responding aggressively - which might be put to the test, as negative opinions are more likely to attract argumentative comments.

Source: Social Hire

If you’re interested in a career change, call us on +44 0208 290 4656

or drop us an email at info@hansonregan.com

Will AI Replace Recruiters?

Some recruiters view the growing presence of AI in talent acquisition as a threat. There is a belief that AI-powered tools will steal jobs and eventually take over (the world), leaving the recruiter all but extinct. However, I’m here to argue that AI-powered recruiting tools will actually help recruiters by liberating them from the menial in favor of the relationship-building, problem-solving and people assessment skills they were likely hired for. And this is great news for candidates, too, as we will soon see. But first, let’s take a look at how AI-powered tools fit into the recruiting process so that we can better understand which tasks AI is replacing.

AI-powered recruiting tools are everywhere!

For almost every stage of the candidate journey (from sourcing to engagement to screening to interviewing and everywhere in between), AI-powered tools are being deployed to improve the recruiting process and the overall candidate (and recruiter and hiring manager) experience. Companies have leveraged the ability to automate tasks to make recruiters’ lives easier. Let’s take a quick look at how AI is stepping in at each stage of the recruiting process:

Sourcing

Companies spend a lot of money trying to find the right candidates, especially since sourcing typically requires a high volume of outreach. But with the right technology, a sourcing specialist’s time can be optimized to the point where he or she is only spending time with job seekers who are more likely to be qualified. That’s where AI comes in.

A few AI technologies out there, like Hiretual, take advantage of the fact that there are mounds of data on all of us scattered across the web. The key is finding and making sense of all this data, which is exactly what their AI technology is built to do. It will scour the web’s more than 700 million professional profiles to find only candidates that meet certain conditions you select (like skills or job titles) and return those candidates to you for follow-up.

The next step is conversations and screening.

Conversations and screening

Whether a candidate comes into your funnel from your sourcing, recruitment marketing or employment branding efforts, the next step in any highly-functioning recruiting funnel is engagement. And one of the most effective ways to do this is through starting a conversation with a candidate either through a live chat or with an AI-powered chatbot.

Now we’ve all heard of chatbots. Chatbots can be found in marketing, customer service and even in the personal finance industry! The main purpose of the chatbot, regardless of the industry, is to start a conversation with the human on the other end. In recruiting, that human is a job seeker or candidate who is likely interested in a company or role and wants to start a conversation to learn more. Chatbots can certainly help.

AI-powered chatbots, like our very own, can start conversations with job seekers and candidates right on your careers site or job requisition landing pages at any time. These chatbots for recruiting are great for a few reasons:

  • They can capture a job seeker’s contact information so you can follow up later.
  • They can conduct basic screening so you only engage with the most qualified candidates.
  • They can automatically schedule an online chat with your recruiters and hiring managers at a later time.
  • They never sleep!

But as we’ll discuss shortly, AI-powered chatbots are there to help, not replace, the recruiter, and to move candidates more quickly to interview.

Interviewing

AI interviewing is still in the early stages, but there are already companies, like Robot Vera, offering this service. While the AI interviewer is clearly not a human, there are some advantages to deploying a technology like this. For one, it keeps all the interviews structured. Same questions. Every time. And we know from plenty of research that the structured interview is one of the most effective hiring techniques.

Secondly, AI interviewing technology can help you get through high volumes of interviews. The robot never gets tired. It will “take notes” on all responses. And it will assess candidates based only on their answers. This is great.

So with all these great AI products in sourcing, engagement and interviewing, will interviewing robots, AI-powered chatbots and AI for sourcing replace humans? The answer, I believe, is not any time soon.

Will AI replace humans?

As I was taught in school, the answer to this question depends. Specifically, it depends on the time frame. Will AI replace recruiters in the far-off future? I’d say it is likely. But in the next 5-10 years? No way. Right now, AI is very good at a narrowly defined set of tasks. Just as the computer is superior to all humans at arithmetic, AI is superior to recruiters at tasks like poring through 700 million professional profiles looking for keywords and returning those profiles to the recruiter.

To better understand this concept, let’s dig a little deeper into the arithmetic analogy. Of course, anyone with a middle school education could theoretically calculate any addition, subtraction, multiplication or division problem given enough time. But the computer excels at this task for a few reasons. Firstly, if there is a significant number of calculations to be done, time is not a problem for the computer. Secondly, the computer will make far fewer errors if it is programmed correctly. And lastly, and somewhat related to the penultimate reason, the computer does not get exhausted, whereas exhaustion makes a human more prone to mistakes.

All of these reasons apply to the “AI versus the human recruiter” debate. Take chatbots, for example. Chatbots will never get tired asking pre-screening questions or chatting with job seekers and candidates. AI that scours the web for candidates that match your ideal candidate profile won’t get blurry-eyed staring at screen after screen of LinkedIn profiles. And the interview robot won’t be tempted to ask questions out of order (thereby ruining the structured interview technique).

But unlike humans, AI is limited to these tasks. AI may be able to perform facial recognition but can it create a painting? AI may be able to recognize speech but can it write a book? The answer to both of these questions, at least for now, is no. Similarly, AI can have basic conversations with candidates but it is the recruiters who can do the relationship-building and assessing of the candidates. That is the true power of the recruiter. And the current use cases of AI should only free up the recruiters to do more of this relationship-building by allowing them to have more meaningful conversations with candidates.

Why You’re TERRIFIED To Find A New Job (Even If You’re Completely Miserable)

You hate your job. You find yourself complaining about it daily to your family and friends. Every Sunday night, you tell yourself that you’re going to finally quit and find a new job because you just can’t take it anymore. … But you don’t.

Instead, you go to work, come home, complain, and start the whole cycle over again. You’re completely miserable in your current job, but you’re absolutely terrified to find a new job. But why?

You’re afraid of the unknown.

Yes, starting a new job can be scary. You have to adapt to a new work environment, make new work friends, and even learn some new skills – and you don’t know if you’ll even like it after everything’s said and done. What if it turns out to be worse than your last job? What if they don’t like you? What if you don’t fit in? What if you don’t perform at the level they expected? It’s similar to starting at a new school where you don’t know anyone, don’t know where anything is, and don’t know what your teachers will be like.

The truth is, starting a new job can be intimidating. You’re walking into a new situation and you’re not sure what to expect. The best thing you can do is get to know the company as much as you can before accepting a job there. Learn it inside out, make an effort to get to know people you’d be working with over LinkedIn or coffee, and ask questions that can give you insight on the company culture.

You’re not confident in what you have to offer.

Don’t feel like you’ve got what it takes to make it anywhere else? Afraid to find a new job because you don’t want to look like an incompetent employee? If you think you’re lacking the skills to succeed elsewhere, take an inventory of your skill sets then compare them to the skill sets that are required for the jobs you’re considering. What are you missing? Where do you need to ramp up your skills? Do you have additional skills that could lend themselves to the job? How and why?

You’re not really sure what you have to offer.

You need to understand what you have to offer so you can market yourself effectively to employers. Again, go in and take a look at your skill sets. Think about past accomplishments at work. What have you achieved? What are you proud of? What problem do you solve at your current company?

You don’t know what you want to do next.

You want to find a new job, but you have no idea what you want to do. All you know is that you hate your current job and you want out. If you’re having trouble figuring out what you want to do next, you need to take some time to explore. Research different jobs, industries, and companies. Talk to people about their work, why they like it, hate it, and what excites them about it. Take some time to figure out what interests you and what projects energize you.

You’re afraid of the financial repercussions.

What if you don’t get the benefits you have at your current job? What if you have to take a pay cut? What if it takes too long to find a new job and you run out of money? Research competitive salary rates using the Glassdoor Know Your Worth salary calculator before you look for a new job. Research the companies you’re interested in to learn about what kinds of benefits they offer employees.

It’s important to understand what your priorities and must-haves are in your new job. The last thing you want to do is accept a job knowing that it won’t meet your needs because it will just result in you looking for a new job in a few months. However, understand that you might not necessarily make the same paycheck as your current job. Research so you know what to expect.

Source: WorkItDaily

If you’re interested in a career change, call us on +44 0208 290 4656

or drop us an email at info@hansonregan.com

4 Things Recruiters Are Looking For When They Search You Online

Recruiters are lurking in the depths of your social media profiles. Are they seeing what you want them to see?

It’s no secret that recruiters are looking up candidates online before they move them forward in the hiring process. It makes sense, though. I mean, who DOESN’T look people, places, or things up online before they commit to them? If you don’t, then welcome to 2017.

According to a recent survey, 92% of recruiters use social media to find high-quality candidates. And if that doesn’t get you hyper-aware of what’s out there about you online, this will: Almost 70% of recruiters have rejected candidates based on the content found on their social networking profiles. Woof.

There are certain things recruiters are looking for when they search you online. And, if you want to make a great first impression on these recruiters, you need to do some recon work. Is there anything out there you DON’T want them to see? If so, take it down.

While you’re cleaning things up, you should take some time to give them what they want, too. During these online searches, recruiters are eager to learn certain things about you, so it’s important you make those things easy for them to find. They want to know that…

1. You know your stuff.

If you’ve been bragging that you’re an expert in whatever it is that you do, you better back it up. What proof do you have that what you’re claiming is true? You know recruiters, employers, and clients are going to be looking for you online, so have something to show them. Brand yourself as an expert in your field by starting a blog or creating an online portfolio of your work.

2. You’re not bad-mouthing your former employer.

If you’re trashing your old boss, colleagues, or company all over the Internet, you need to sit down because (surprise) recruiters are not impressed. In fact, they’re thinking something like this, “If we hire them and, for some reason, they don’t work out, what if they bash US all over the Web? That’s not a good look for our brand…” So, please don’t broadcast your woes all over the Internet.

3. You have a personality.

Now more than ever, companies are hiring people based on their “fit” instead of just their work experience. Employers are realizing that hiring the wrong person can completely throw off a team dynamic, and cause workplace issues that can affect the business. So, finding people who share the same values, passions, and goals is becoming more and more important.

4. You’re not posting about inappropriate stuff.

This is a huge red flag for employers and recruiters. According to a recent study, employers have little tolerance for bigoted comments and mentions of illegal drugs. Stay clean, my friends. (The good news? They don’t care so much about your beer pong photos anymore — as long as drinking isn’t the only thing you post about. So yay for that.)

These are just a few things recruiters are looking for when they search you online. Of course, each recruiter, company, and industry has different things they want in a job candidate, so make sure you do your homework. That way, you can prove that you’re a great candidate to bring in for a job interview.

3 HUGE Problems With Your Networking Strategy

Want to make tons of valuable connections and build a solid network? Of course you do! If you’re struggling to make more career friends and professional connections, you probably need to take a second look at your networking strategy.

Are you making any of these mistakes?

1. You’re A Selfish Networker

When you’re networking, do you go in thinking, “How are you going to help me?” or do you go in thinking, “How can we help each other?”

Networking isn’t all about your needs – it’s about the other person’s needs, too. Yes, you have a goal: you’re looking for someone to help you get ahead – someone who can give you the right introduction, but the people you’re networking with are trying to do the same thing.

If you’re having a conversation with someone and making it all about you and your needs, you’re probably not going to get too much support from the other person. They also have goals, and if they think you’ll just take advantage of their network without anything in return, they probably won’t be too open to working with you.

The key is offering value. Before you ask, you should always offer. Whether it’s a relevant connection or simply a relevant article, offering your support early in the game will prove you’re going to be a valuable connection in the future. It will also encourage people to return the favor somehow – and that favor could be introducing you to someone in their network.

2. You Wait Too Long To Follow Up

Don’t you hate it when you have a great conversation with someone and then never hear from them again? How about when they wait so long to reach out that you don’t remember what you talked about? UGH, me too!

Don’t be that person. It’s just bad networking. Follow up within 24 hours of your conversation and briefly refresh them on your conversation. Then, make a note of your conversation so you can remember why you connected later on.

3. You Don’t Keep In Touch

Another thing I bet frustrates you just as much as it frustrates me is when people only reach out to you when they need something from you. Talk about feeling used!

It’s important to nurture your network, even when you don’t need it. Sending a friendly email to see how business is going, or sending an article you think they might find interesting, is all you need to stay fresh in your connections’ minds. Then, when you do need their help, they won’t feel like you’re reaching out right out of the blue, and they will be more likely to help you out.

So, how does your networking strategy stack up? If you’re making any of these mistakes, you should reevaluate. Best of luck, and happy networking!

Are you sick of feeling AWKWARD when you’re networking?

There are a lot of people who HATE the thought of networking. It can seem intimidating, fake, and hard to do. But the reality is that networking is a key skill you need to learn in order to be successful – no matter what you do.

 

Source: WorkItDaily

If you’re interested in a career change, call us on +44 0208 290 4656

or drop us an email at info@hansonregan.com

‘What’s Your Biggest Weakness?’ How To Answer Common Interview Questions

Even though it’s one of the most common interview questions out there, everyone hates being asked, “What’s your biggest weakness?” during a job interview. It’s hard enough showing your potential during an interview. How are you supposed to sell yourself to employers when they want you to tell them what’s wrong with you?

Thankfully, answering this question isn’t as hard as you might think. And, you can actually use it to show your potential if you respond strategically.

How To Answer “What’s Your Biggest Weakness?” In An Interview

Don’t lie or come up with an answer you THINK might impress the interviewer (like “being a perfectionist” or “working too hard”). Instead, focus on a skill you’re trying to advance.

For example, let’s say you’re interviewing for a training coordinator role at your favorite company. You love developing training materials and teaching others; however, you get very nervous when delivering your presentations because public speaking isn’t your forte.

Instead of trying to sweep this under the rug, address it, but ease the interviewer’s concerns by sharing what you’re doing to overcome this challenge.

Here’s an example answer:

“I have to admit that public speaking has always been difficult for me because I’m an introvert. It makes me nervous to get up in front of people and talk. However, I’ve learned that this was an integral part of training others, which I love doing. So, I’ve been working hard to improve my public speaking skills by participating in monthly Toastmasters meetings as well as taking on volunteer training sessions for colleagues so I can get some extra practice. Since challenging myself to do this, I’ve noticed a big difference in my confidence level and have felt more capable than ever in my role as a trainer.”

Essentially, you want to convey that you understand you’re weak in one area, but to make up for it, you’ve been working hard to improve that area because you know it’s important in your role.

There’s no need to give a long explanation for this question. Keep it simple and straightforward, and focus on the positives rather than dwell on the negatives.

Don’t get stumped by common interview questions like this one. Instead, go in prepared. The first step is to be honest with yourself and tailor your answers so you can market your skills rather than detract from your potential.

 

Source: WorkItDaily

If you’re interested in a career change, call us on +44 0208 290 4656

or drop us an email at info@hansonregan.com

5 Ugly Myths About Changing Career in Your 30s

Traditionally, being in your 20s is seen as a time to be footloose and fancy free, to conclude your education, to explore your career options and to figure out what you want to do with the rest of your life. But by the time you turn 30, it’s generally expected that you’ll be working on ways to advance on your chosen career path.

However, if you find in your 30s that your career isn’t fulfilling, you don’t have to spend the rest of your life dreading the sound of your alarm clock — there’s still time to shift gears and go in a totally different direction. You just have to be prepared for naysayers — even well-meaning friends and family members — who will question your judgment.

Here are five myths you can expect to hear cited by these naysayers, along with helpful advice for successfully changing careers in your 30s.

‘That’s Totally Impractical/You Should Know What You Want to Do’

This common myth is based on the fear of change, which can lead you to stick with a decision and its resulting course regardless of whether it’s making you unhappy. Just remember that it’s totally acceptable to change your mind. “When you were 5 years old and someone asked you what you wanted to be, do you still want to do that? Chances are, probably not,” says Becca Shelton, assistant director for career services at the University of Richmond. Shelton works with adult learners, alumni and experienced professionals who are seeking career guidance.

“Our ideas change, our vision for ourselves changes over time, and that’s one of the beautiful things about being a human being,” Shelton says. Most people spend at least 40 hours a week at work, which is more than 2,000 hours a year. “That’s a lot over a lifetime, so you should ask yourself if your job allows you to use your strengths and be the best version of yourself,” Shelton says.

One person who knows something about change is Cortney McDermott, a TEDx speaker, strategist to Fortune 500 executives and entrepreneurial leaders and the author of “Change Starts Within You: Unlock the Confidence to Lead with Intuition.” Before she became an entrepreneur, McDermott was an executive at Vanity Fair Corp. and Sustainability Partners, a professor of graduate studies for a Big Ten university and a global associate for beCause Consortium.

“When we start to listen to our intuition — that inner force that urges us to change and grow — we have to be prepared to meet with other people’s fears, as well as our own ingrained ideas about what’s ‘practical’ or ‘realistic,’ ” McDermott says. “If this myth is plaguing you now, see if you can find one or more sources — such as podcasts or books — or people to reinforce your confidence in what’s possible.”

McDermott says she has used this technique to reinvent herself several times. “Remember: realists don’t change the world. Unrealistic people do,” she says.

‘You’re Too Old/It’s Too Late’

Who gets to determine when it’s too late to change course? “When I was working as a corporate executive, I dreamed of becoming a writer,” McDermott says. The few people she confided in always expressed doubt about such a major change. The consistent message was that she should stick with what she was doing. “Luckily, I didn’t — but what I did do was to start small, dedicating a morning window for this passion every day before work and often again in the evenings.” McDermott says her story offers proof that it’s never too late.

Here’s something else to consider: Shelton notes that people in their 30s probably aren’t far past the halfway mark to retirement. “With the workplace being more fluid, so are skill sets and how they are applied to different jobs and careers,” she says.

‘No One Is Going to Hire You’

Changing jobs in your 30s is one thing, but changing careers is a different concept. How will employers view a job candidate in this age group applying for their first job in this field? Probably the same way they view everyone else — and the hiring manager might be impressed that you have the guts to follow your dreams.

“When preparing for the interview, identify your transferable skills that would be related to your target industry, and be able to talk about how you used those skills,” says Cynthia Saunders-Cheatham, assistant dean of the career management center at Cornell University’s SC Johnson College of Business.

Saunders-Cheatham recommends networking to find jobs. “Leverage your alumni network. Schedule informational meetings. Take people out for coffee and ask questions about what they do, trends in the industry, company goals and challenges.”

Another key is to embrace LinkedIn. Saunders-Cheatham says it isn’t enough to just set up the basics on the site. “You need to tailor your profile to the role and industry and highlight keywords that are relevant to the industry so that recruiters can find you.”

Her other LinkedIn tips include the following:

  • Set alerts.
  • Follow relevant companies.
  • Join relevant groups, including your alumni and industry groups.
  • Learn how to use LinkedIn to find contacts in specific fields and reach out to them for information.
  • Use the site’s new mentorship platform.

‘If You Get Hired, You’ll Have to Start at the Bottom’

The naysayers will say you’ll have to take an entry-level position, so you’ll be starting over and spending years trying to get re-established. “While it’s unlikely that you will jump right into a senior level position, don’t ever dismiss the amount of experience, skills and talents you have developed throughout your career so far,” Shelton says. “Think of your skills as a tool box — what’s in your tool box and how can you help employers solve problems?”

‘You’ll Have to Go Back To School, Which Is Expensive and Will Take Too Much Time’

Changing careers can indeed require additional training and education, but it doesn’t have to mean a new four-year degree. “Maybe there is a certificate you can pick up, or other training that will give you an edge, but this is all part of your story,” Shelton says. “It is important to know your story, own your story, and articulate that to others.”

If you know you’ll need to go back to school full time, she recommends that you start making plans. “Know that there are many flexible educational programs available for those working full time who want to expand their knowledge and marketability.” Some programs are offered online, and some are at night or on the weekend, making them more likely to fit your schedule. There also are grants and scholarships available, based on your major, location, age and other factors.

Changing careers in your 30s might not be easy, but it can definitely be accomplished. Now that you know the myths — and the truth — you can make an informed decision.

Source: Talent Culture

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Do Open Offices Kill Collaboration?

Open workspaces may actually decrease face-to-face interactions, according to a new study by Harvard researchers on how open workspaces affect human collaboration. In the study, employees wore sociometric badges that measured their actions. The findings showed that in open workspaces, face-to-face interactions decreased by about 70 percent, while electronic interactions increased.

The point of open offices is to remove barriers and foster a collaborative environment, but does all that open space actually produce the opposite effect?

Defining Open Office Spaces

Not everyone is convinced you need to rush to build walls in your open offices based on these findings. “I hate to sit on the fence, but it depends on your definition of ‘open office,’ ” says Brent Zeigler, president and director of design at Dyer Brown, a Boston-based architectural firm.

“If we are talking about a setting where the only areas for working — meetings, collaborating, heads-down work or any productive task — are in the open with no walls, no dividers and no separation, then I would say that kind of open office will likely hinder collaboration,” Zeigler says.

“However, if the definition of an open office describes a workplace in which only a few — or none — have enclosed offices and the remainder is primarily workstations with an appropriate amount of space for meeting, collaboration and/or private or sensitive conversations, then I believe that workplace collaboration would be enhanced or improved.”

Some companies could be applying the wrong terminology to their workspaces. For example, Lynnette Holsinger, president of the HR Florida State Council, says most of the open office spaces that she’s seen don’t fit her definition of being open in terms of collaboration. “The companies define them as open because there are no ceilings and doors, but there are cubicle walls.”

In a Robert Half survey conducted last year, 65 percent of workers agreed that open plan offices contribute to collaboration. However, 60 percent also believed that private offices were conducive to collaboration, and 68 percent felt the same way about semi-private cubicles. The highest percentage, 69 percent, thought a combination of open and private spaces was good for collaboration.

The Privacy-Disruption Factor

It’s possible that open offices may be hindering collaboration because employees are concerned that other workers could hear their conversations. There will always be a need for privacy, but according to Ashley Dunn, director of workplace at Dyer Brown, we may need to change how we think about privacy needs in the workplace. “Fifteen years ago it was common in most markets for everyone to have an office, giving an employee privacy 100 percent of the time even if they only needed privacy 30 percent of the time.”

If you only need privacy 30 percent of the time, Dunn says the office is not being used optimally 70 percent of the time, which contributes to a lack of connection between co-workers.

“Open layouts flip that notion on its head: If you need privacy 30 percent of the time for confidential conversations or heads-down work, you should be able to find a space that is private when you need it and within reasonable proximity to your desk,” Dunn says.

“That room may take up 60 square feet instead of a 120-square-foot private office and serve the privacy needs of several employees instead of only one.”

Collaboration vs. Other Factors

While the Harvard researchers’ study only addressed collaboration, companies considering this type of design should also weigh other factors. For example, some employers might like open office plans so they can “keep an eye” on workers. “People can look busy without being more productive, so open work spaces do not guarantee increased productivity,” Holsinger says.

Eighty-six percent of respondents in the Robert Half survey felt that having a private office helps productivity, compared with 51 percent of employees in semi-private cubicles and 48 percent of those working in an open floor plan. “While some people can be very productive in a completely open workspace, I don’t think this is the norm,” Zeigler says. “The majority of employees are most productive in a setting that supports all of the different tasks that they need to complete in a day.”

Designing a progressive workplace should take into account other variables as well.

“Goals for high-performance workplace projects might include increasing transparency between managers and staff, reducing the number of private offices and using the space saved to program team rooms, or eliminating hard walls in favor of flexible design that responds quickly to a company’s growth and evolving needs, especially in fast-paced industries,” Dunn says. “The goals of each organization will be different, and face-to-face collaboration is important, but it’s not the sole objective for every new workplace.”

In the final analysis, creating collaboration may be based more on the company’s culture than on the physical office space. “Unless a collaborative culture has been nurtured, it may not increase collaboration — in fact it could cause co-workers forced into this environment to be even less collaborative and feel defensive,” Holsinger says. She says a fully open office should be used only within a department, or between departments that have to work together, and only when there’s a strong collaborative culture in place.

Source: Talent Culture

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

#WorkTrends: The Way We Work

The way we work is changing — fast. Where we work, who we work with and how we get work done is all evolving. On this week’s episode we talk to Sarah Travers, CEO of the co-working space Workbar, and to one head of recruiting who thinks remote working and co-working aren’t going anywhere.

Travers is a longtime co-working evangelist. She has spent her entire career selling the idea of co-working, first at IWG (Regus), a global provider of flexible workspace solutions. She joined Workbar in late 2017. She has the unique perspective of witnessing the industry’s explosive growth — as both a seasoned veteran of the world’s largest shared office giant and as the CEO of Boston’s original co-working space.

She shares her thoughts on where the industry is headed and why co-working is so much more than either a physical space or the popular image of a collection of young digital nomads working on computers in a shared space.

Making Connections

Travers says co-working is often defined as a group of individuals working together in a shared communal setting, which evokes the idea of young digital workers in an open room focusing on their own tasks — a concept she says “couldn’t be further from the truth.” Rather, she says, users often find the co-working atmosphere inspiring and valuable because it offers the opportunity to make connections and work beside people from all different types of businesses and companies.

She says co-working users are also drawn to business development opportunities through classes, event programming and networking at new member lunches or happy hours. “There are just a lot of ways to grow your own personal and professional network in this space,” she says. “It just goes beyond that sort of original idea of a bunch of millennials sitting with headphones typing away in one big room.”

Changing Demographics and Needs

Travers says her company’s research clearly debunks the idea that co-working spaces are just for millennials or people in technology. She says Workbar members cut across a number of industries and have an average age of 38 or 39. They are also increasingly employees of large organizations.

“I think that you also hear that only individuals and small teams use co-working space,” she says. “We have seen that Fortune 500 companies often use co-working not just for remote employees but also for groups, as a way to drive innovation outside of a traditional headquarters.”

What’s Driving Growth

Travers says co-working is clearly no longer thought of as just a short-term trend or a solution for people who don’t want to work from their kitchen table or in a coffee shop. She says one factor driving the increasing popularity of co-working spaces is a cultural shift away from merely clocking in and out of work and toward getting more satisfaction and meaning from our jobs.

“There’s a real value proposition behind it that’s been embraced by a larger audience, as well as by some of the big players in the industry on both the landlord and the tenant side,” she says. “The landlords are aware that they need to evolve their offerings to meet the changing environments. On the flip side, the tenants are more focused on the need to enjoy the experience of the office environment.”

Source: Talent Culture

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

A New Robot is Teaching People with Autism to Navigate Office Politics

Adults with autism often find it difficult to read subtle emotional cues that other people may take for granted, and teaching them how to recognize those signals can be a challenge. Researchers at Scotland’s Heriot-Watt University say they’ve invented a solution: Alyx, a robotic emotion teacher.

Alyx was built to address a particular problem: In the US and the UK, more than 80% of autistic adults are unemployed. “And the main issue is not that they can’t do the work,” Thusha Rajendran, one of Alyx’s creators, says. “It’s the workplace politics, especially being able to understand what people really mean, rather than simply what they say. And part of that is understanding emotional expression.”

Alyx’s face is simple, with very few features: humanoid, Rajendran explains, but not human-like. And that’s on purpose; human faces generate lots of small extraneous signals that people with autism can find difficult to decode. By contrast, Alyx’s basic, easily controllable robotic face makes it an ideal teacher of social cues.

In a training session with Alyx, a user would perform a clerical task, like filing paper, and Alyx would respond with a sign of approval or disapproval. Alyx’s creators say this is the main hurdle that adults with autism need help getting over: knowing whether or not they’re doing a good job.

Source: Quartz

Artificial Intelligence has a Strange New Muse: Our Sense of Smell

Today’s artificial intelligence systems, including the artificial neural networks broadly inspired by the neurons and connections of the nervous system, perform wonderfully at tasks with known constraints. They also tend to require a lot of computational power and vast quantities of training data. That all serves to make them great at playing chess or Go, at detecting if there’s a car in an image, at differentiating between depictions of cats and dogs. “But they are rather pathetic at composing music or writing short stories,” said Konrad Kording, a computational neuroscientist at the University of Pennsylvania. “They have great trouble reasoning meaningfully in the world.”

To overcome those limitations, some research groups are turning back to the brain for fresh ideas. But a handful of them are choosing what may at first seem like an unlikely starting point: the sense of smell, or olfaction. Scientists trying to gain a better understanding of how organisms process chemical information have uncovered coding strategies that seem especially relevant to problems in AI. Moreover, olfactory circuits bear striking similarities to more complex brain regions that have been of interest in the quest to build better machines.

Computer scientists are now beginning to probe those findings in machine learning contexts.

Flukes and Revolutions

State-of-the-art machine learning techniques used today were built at least in part to mimic the structure of the visual system, which is based on the hierarchical extraction of information. When the visual cortex receives sensory data, it first picks out small, well-defined features: edges, textures, colors. That processing depends on spatial mapping: the neuroscientists David Hubel and Torsten Wiesel discovered in the 1950s and ’60s that specific neurons in the visual system correspond to specific pixel locations in the retina, a finding for which they won a Nobel Prize.

As visual information gets passed along through layers of cortical neurons, details about edges and textures and colors come together to form increasingly abstract representations of the input: that the object is a human face, and that the identity of the face is Jane, for example. Every layer in the network helps the organism achieve that goal.

Deep neural networks were built to work in a similarly hierarchical way, leading to a revolution in machine learning and AI research. To teach these nets to recognize objects like faces, they are fed thousands of sample images. The system strengthens or weakens the connections between its artificial neurons to more accurately determine that a given collection of pixels forms the more abstract pattern of a face. With enough samples, it can recognize faces in new images and in contexts it hasn’t seen before.
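
To make the “strengthens or weakens the connections” idea concrete, here is a minimal Python sketch. Everything in it (the toy data, the single-layer network, the learning rate) is illustrative rather than drawn from any system described in this article: after each labeled example, the connection weights are nudged so that the correct output responds more strongly next time.

import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 200 random 16-pixel patterns, labeled by a made-up rule
X = rng.normal(size=(200, 16))
y = (X[:, :8].sum(axis=1) > 0).astype(float)

w = np.zeros(16)  # connection strengths, all starting equal
b = 0.0
lr = 0.1          # how hard each example nudges the weights

for _ in range(20):
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + np.exp(-(xi @ w + b)))  # the network's current guess
        # Strengthen or weaken each connection in proportion to the error
        w += lr * (yi - p) * xi
        b += lr * (yi - p)

accuracy = (((X @ w + b) > 0).astype(float) == y).mean()
print(f"training accuracy: {accuracy:.2f}")  # approaches 1.0 on this toy task

Real deep networks stack many such layers and use more sophisticated update rules, but the core loop, adjusting connection strengths example by example, is the same.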

Researchers have had great success with these networks, not just in image classification but also in speech recognition, language translation and other machine learning applications. Still, “I like to think of deep nets as freight trains,” said Charles Delahunt, a researcher at the Computational Neuroscience Center at the University of Washington. “They’re very powerful, so long as you’ve got reasonably flat ground, where you can lay down tracks and have a huge infrastructure. But we know biological systems don’t need all that — that they can handle difficult problems that deep nets can’t right now.”

Take a hot topic in AI: self-driving cars. As a car navigates a new environment in real time — an environment that’s constantly changing, that’s full of noise and ambiguity — deep learning techniques inspired by the visual system might fall short. Perhaps methods based loosely on vision, then, aren’t the right way to go. That vision became such a dominant source of insight was partly incidental, “a historical fluke,” said Adam Marblestone, a biophysicist at the Massachusetts Institute of Technology. It was simply the system that scientists understood best, with clear applications to image-based machine learning tasks.

But “every type of stimulus doesn’t get processed in the same way,” said Saket Navlakha, a computer scientist at the Salk Institute for Biological Studies in California. “Vision and olfaction are very different types of signals, for example. … So there may be different strategies to deal with different types of data. I think there could be a lot more lessons beyond studying how the visual system works.”

He and others are beginning to show that the olfactory circuits of insects may hold some of those lessons. Olfaction research didn’t take off until the 1990s, when the biologists Linda Buck and Richard Axel, both at Columbia University at the time, discovered the genes for odor receptors. Since then, however, the olfactory system has become particularly well characterized, and it’s something that can be studied easily in flies and other insects. It’s tractable in a way that visual systems are not for studying general computational challenges, some scientists argue.

“We work on olfaction because it’s a finite system that you can characterize relatively completely,” Delahunt said. “You’ve got a fighting chance.”

“People can already do such fantastic stuff with vision,” added Michael Schmuker, a computational neuroscientist at the University of Hertfordshire in England. “Maybe we can do fantastic stuff with olfaction, too.”

Random and Sparse Networks

Olfaction differs from vision on many fronts. Smells are unstructured. They don’t have edges; they’re not objects that can be grouped in space. They’re mixtures of varying compositions and concentrations, and they’re difficult to categorize as similar to or different from one another. It’s therefore not always clear which features should get attention.

These odors are analyzed by a shallow, three-layer network that’s considerably less complex than the visual cortex. Neurons in olfactory areas randomly sample the entire receptor space, not specific regions in a hierarchy. They employ what Charles Stevens, a neurobiologist at the Salk Institute, calls an “antimap.” In a mapped system like the visual cortex, the position of a neuron reveals something about the type of information it carries. But in the antimap of the olfactory cortex, that’s not the case. Instead, information is distributed throughout the system, and reading that data involves sampling from some minimum number of neurons. An antimap is achieved through what’s known as a sparse representation of information in a higher dimensional space.

Take the olfactory circuit of the fruit fly: 50 projection neurons receive input from receptors that are each sensitive to different molecules. A single odor will excite many different neurons, and each neuron represents a variety of odors. It’s a mess of information, of overlapped representations, that is at this point represented in a 50-dimensional space. The information is then randomly projected to 2,000 so-called Kenyon cells, which encode particular scents. (In mammals, cells in what’s known as the piriform cortex handle this.) That constitutes a 40-fold expansion in dimension, which makes it easier to distinguish odors by the patterns of neural responses.
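
As a rough Python sketch of that expansion step: the 50 projection neurons and 2,000 Kenyon cells come from the description above, while the sparse random wiring and the made-up odor vector are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(42)
n_pn, n_kc = 50, 2000  # projection neurons and Kenyon cells, per the text

# Each Kenyon cell samples a random handful of projection neurons;
# the sparse 0/1 wiring here is an illustrative assumption
wiring = (rng.random((n_kc, n_pn)) < 0.1).astype(float)

odor = rng.random(n_pn)   # hypothetical 50-dimensional odor response
expanded = wiring @ odor  # the 2,000-dimensional, 40-fold-expanded version
print(expanded.shape)     # (2000,)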

“Let’s say you have 1,000 people and you stuff them into a room and try to organize them by hobby,” Navlakha said. “Sure, in this crowded space, you might be able to find some way to structure these people into their groups. But now, say you spread them out on a football field. You have all this extra space to play around with and structure your data.”

Once the fly’s olfactory circuit has done that, it needs a way to identify distinct odors with non-overlapping sets of neurons. It does this by “sparsifying” the data. Only around 100 of the 2,000 Kenyon cells — 5 percent — are highly active in response to a given smell (less active cells are silenced), providing each odor with a unique tag.
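
In code, that sparsifying step can be pictured as a simple winner-take-all over the expanded vector. The 5 percent figure comes from the text; the stand-in activity values are made up.

import numpy as np

rng = np.random.default_rng(7)
expanded = rng.random(2000)          # stand-in for the 2,000 Kenyon cell responses

k = int(0.05 * expanded.size)        # keep the top 5 percent (about 100 cells)
winners = np.argsort(expanded)[-k:]  # indices of the most active cells
tag = np.zeros(expanded.size, dtype=bool)
tag[winners] = True                  # every other cell is silenced
print(int(tag.sum()))                # 100 active cells form the odor's tag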

In short, while traditional deep networks (again taking their cues from the visual system) constantly change the strength of their connections as they “learn,” the olfactory system generally does not seem to train itself by adjusting the connections between its projection neurons and Kenyon cells.

As researchers studied olfaction in the early 2000s, they developed algorithms to determine how random embedding and sparsity in higher dimensions helped computational efficiency. One pair of scientists, Thomas Nowotny of the University of Sussex in England and Ramón Huerta of the University of California, San Diego, even drew connections to another type of machine learning model, called a support vector machine. They argued that the ways both the natural and artificial systems processed information, using random organization and dimensionality expansion to represent complex data efficiently, were formally equivalent. AI and evolution had converged, independently, on the same solution.

Intrigued by that connection, Nowotny and his colleagues continue to explore the interface between olfaction and machine learning, looking for a deeper link between the two. In 2009, they showed that an olfactory model based on insects, initially created to recognize odors, could also recognize handwritten digits. Moreover, removing the majority of its neurons — to mimic how brain cells die and aren’t replaced — did not affect its performance too much. “Parts of the system might go down, but the system as a whole would keep working,” Nowotny said. He foresees implementing that type of hardware in something like a Mars rover, which has to operate under harsh conditions.

But for a while, not much work was done to follow up on those findings — that is, until recently, when some scientists began revisiting the biological structure of olfaction for insights into more specific machine learning problems.

Hard-Wired Knowledge and Fast Learning

Delahunt and his colleagues have repeated the same kind of experiment Nowotny conducted, using the moth olfactory system as a foundation and comparing it to traditional machine learning models. Given fewer than 20 samples, the moth-based model recognized handwritten digits better, but when provided with more training data, the other models proved much stronger and more accurate. “Machine learning methods are good at giving very precise classifiers, given tons of data, whereas the insect model is very good at doing a rough classification very rapidly,” Delahunt said.

Olfaction seems to work better when it comes to speed of learning because, in that case, “learning” is no longer about seeking out features and representations that are optimal for the particular task at hand. Instead, it’s reduced to recognizing which of a slew of random features are useful and which are not. “If you can train with just one click, that would be much more beautiful, right?” said Fei Peng, a biologist at Southern Medical University in China.

In effect, the olfaction strategy is almost like baking some basic, primitive concepts into the model, much like a general understanding of the world is seemingly hard-wired into our brains. The structure itself is then capable of some simple, innate tasks without instruction.

One of the most striking examples of this came out of Navlakha’s lab last year. He, along with Stevens and Sanjoy Dasgupta, a computer scientist at the University of California, San Diego, wanted to find an olfaction-inspired way to perform searches on the basis of similarity. Just as YouTube can generate a sidebar list of videos for users based on what they’re currently watching, organisms must be able to make quick, accurate comparisons when identifying odors. A fly might learn early on that it should approach the smell of a ripe banana and avoid the smell of vinegar, but its environment is complex and full of noise — it’s never going to experience the exact same odor again. When it detects a new smell, then, the fly needs to figure out which previously experienced odors the scent most resembles, so that it can recall the appropriate behavioral response to apply.

Navlakha created an olfactory-based similarity search algorithm and applied it to data sets of images. He and his team found that their algorithm performed better than, and sometimes two to three times as well as, traditional nonbiological methods involving dimensionality reduction alone. (In these more standard techniques, objects were compared by focusing on a few basic features, or dimensions.) The fly-based approach also “used about an order of magnitude less computation to get similar levels of accuracy,” Navlakha said. “So it either won in cost or in performance.”
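
A toy reconstruction of the general recipe behind a fly-inspired similarity search, random expansion followed by sparse tags compared by overlap, might look like the sketch below. This is not the authors’ published algorithm or code; the dimensions, wiring and data are all illustrative.

import numpy as np

rng = np.random.default_rng(0)

def fly_tag(x, wiring, frac=0.05):
    """Expand with a random projection, then keep only the top few percent."""
    h = wiring @ x
    k = max(1, int(frac * h.size))
    tag = np.zeros(h.size, dtype=bool)
    tag[np.argsort(h)[-k:]] = True
    return tag

d, D, n = 50, 2000, 200
wiring = (rng.random((D, d)) < 0.1).astype(float)  # illustrative sparse wiring
items = rng.random((n, d))                         # hypothetical item features
tags = np.array([fly_tag(x, wiring) for x in items])

query = items[0] + 0.01 * rng.normal(size=d)       # a noisy repeat of item 0
overlap = (tags & fly_tag(query, wiring)).sum(axis=1)
print(overlap.argmax())                            # expect 0: the near-duplicate

Similar items excite overlapping sets of cells, so counting shared active cells stands in for a full distance computation.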

Nowotny, Navlakha and Delahunt showed that an essentially untrained network could already be useful for classification computations and similar tasks. Building in such an encoding scheme leaves the system poised to make subsequent learning easier. It could be used in tasks that involve navigation or memory, for instance — situations in which changing conditions (say, obstructed paths) might not leave the system with much time to learn or many examples to learn from.

Peng and his colleagues have started research on just that, creating an ant olfactory model to make decisions about how to navigate a familiar route from a series of overlapped images.

In work currently under review, Navlakha has applied a similar olfaction-based method for novelty detection, the recognition of something as new even after having been exposed to thousands of similar objects in the past.

And Nowotny is examining how the olfactory system processes mixtures. He’s already seeing possibilities for applications to other machine learning challenges. For instance, organisms perceive some odors as a single scent and others as a mix: A person might take in dozens of chemicals and know she’s smelled a rose, or she might sense the same number of chemicals from a nearby bakery and differentiate between coffee and croissants. Nowotny and his team have found that separable odors aren’t perceived at the same time; rather, the coffee and croissant odors are processed very rapidly in alternation.

That insight could be useful for artificial intelligence, too. The cocktail party problem, for example, refers to how difficult it is to separate numerous conversations in a noisy setting. Given several speakers in a room, an AI might solve this problem by cutting the sound signals into very small time windows. If the system recognized sound coming from one speaker, it could try to suppress inputs from the others. By alternating like that, the network could disentangle the conversations.
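
The alternation idea can be caricatured in a few lines of Python: mix two synthetic “speakers” at different pitches, then, window by window, label whichever source currently dominates. This is only a toy illustration of cutting a signal into small time windows, not a working source-separation method.

import numpy as np

rng = np.random.default_rng(1)
sr = 8000
t = np.arange(2 * sr)                             # two seconds of samples
a = np.sin(2 * np.pi * 440 * t / sr) * (t < sr)   # speaker A talks first
b = np.sin(2 * np.pi * 220 * t / sr) * (t >= sr)  # speaker B talks second
mix = a + b + 0.05 * rng.normal(size=t.size)

window = 400
labels = []
for start in range(0, t.size, window):
    seg = mix[start:start + window]
    spec = np.abs(np.fft.rfft(seg))
    freqs = np.fft.rfftfreq(seg.size, d=1 / sr)
    # Crude dominance test: compare energy near each speaker's pitch
    a_energy = spec[np.abs(freqs - 440).argmin()]
    b_energy = spec[np.abs(freqs - 220).argmin()]
    labels.append("A" if a_energy > b_energy else "B")

print("".join(labels))  # expect a run of A's followed by B's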

Enter the Insect Cyborgs

In a paper posted last month on the scientific preprint site arxiv.org, Delahunt and his University of Washington colleague J. Nathan Kutz took this kind of research one step further by creating what they call “insect cyborgs.” They used the outputs of their moth-based model as the inputs of a machine learning algorithm, and saw improvements in the system’s ability to classify images. “It gives the machine learning algorithm much stronger material to work with,” Delahunt said. “Some different kind of structure is being pulled out by the moth brain, and having that different kind of structure helps the machine learning algorithm.”

Some researchers now hope to also use studies in olfaction to figure out how multiple forms of learning can be coordinated in deeper networks. “But right now, we’ve covered only a little bit of that,” Peng said. “I’m not quite sure how to improve deep learning systems at the moment.”

One place to start could lie not only in implementing olfaction-based architecture but also in figuring out how to define the system’s inputs. In a paper just published in Science Advances, a team led by Tatyana Sharpee of the Salk Institute sought a way to describe smells. Images are more or less similar depending on the distances between their pixels in a kind of “visual space.” But that kind of distance doesn’t apply to olfaction. Nor can structural correlations provide a reliable bearing: Odors with similar chemical structures can be perceived as very different, and odors with very different chemical structures can be perceived as similar.

Sharpee and her colleagues instead defined odor molecules in terms of how often they’re found together in nature (for the purposes of their study, they examined how frequently molecules co-occurred in samples of various fruits and other substances). They then created a map by placing odor molecules closer together if they tended to co-activate, and farther apart if they did so more rarely. They found that just as cities map onto a sphere (the Earth), the odor molecules map onto a hyperbolic space, a sphere with negative curvature that looks like a saddle.
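
One way to picture that co-occurrence map in Python: count how often each pair of molecules shows up in the same sample, then treat frequent co-occurrence as a small distance. The molecule names and samples below are invented for illustration, and the hyperbolic embedding step itself is omitted.

from collections import Counter
from itertools import combinations

# Hypothetical samples, each listing the odor molecules detected in it
samples = [
    {"hexanal", "ethyl_butanoate", "linalool"},
    {"hexanal", "ethyl_butanoate"},
    {"linalool", "limonene"},
    {"hexanal", "limonene", "linalool"},
]

pair_counts = Counter()
for s in samples:
    for a, b in combinations(sorted(s), 2):
        pair_counts[(a, b)] += 1

# Molecules that co-occur often would sit close together on the map
for pair, count in pair_counts.most_common():
    print(pair, "distance ~", round(1 / count, 2))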

Sharpee speculated that feeding inputs with hyperbolic structure into machine learning algorithms could help with the classification of less-structured objects. “There’s a starting assumption in deep learning that the inputs should be done in a Euclidean metric,” she said. “I would argue that one could try changing that metric to a hyperbolic one.” Perhaps such a structure could further optimize deep learning systems.

A Common Denominator

Right now, much of this remains theoretical. The work by Navlakha and Delahunt needs to be scaled up to much more difficult machine learning problems to determine whether olfaction-inspired models stand to make a difference. “This is all still emerging, I think,” Nowotny said. “We’ll see how far it will go.”

What gives researchers hope is the striking resemblance the olfactory system’s structure bears to other regions of the brain across many species, particularly the hippocampus, which is implicated in memory and navigation, and the cerebellum, which is responsible for motor control. Olfaction is an ancient system dating back to chemosensation in bacteria, and is used in some form by all organisms to explore their environments.

“It seems to be closer to the evolutionary origin point of all the things we’d call cortex in general,” Marblestone said. Olfaction might provide a common denominator for learning. “The system gives us a really conserved architecture, one that’s used for a variety of things across a variety of organisms,” said Ashok Litwin-Kumar, a neuroscientist at Columbia. “There must be something fundamental there that’s good for learning.”

The olfactory circuit could act as a gateway to understanding the more complicated learning algorithms and computations used by the hippocampus and cerebellum — and to figuring out how to apply such insights to AI. Researchers have already begun turning to cognitive processes like attention and various forms of memory, in hopes that they might offer ways to improve current machine learning architectures and mechanisms. But olfaction might offer a simpler way to start forging those connections. “It’s an interesting nexus point,” Marblestone said. “An entry point into thinking about next-generation neural nets.”

Source: Wired

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

A History of Artificial Intelligence in Film

For nearly 100 years, the film industry has presented us with a number of onscreen representations of artificial intelligence. These AI characters vary from big to small, from good to evil, and from anthropomorphic to clearly robotic. You’re probably well-acquainted with beloved popular AI figures like R2-D2 and WALL-E, but are you familiar with the first robot protagonist or the first-ever instance of AI on the silver screen? For all you movie and tech buffs out there, here’s a look back at the cinematic inclusion of AI technology from the early days of film to the present day.

Source: Enlightened

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Why Soft Skills are Important

Soft skills are some of the most difficult competencies for people to understand. Job seekers and hiring managers alike can struggle with the challenge of defining, demonstrating, and recognizing soft skills. Though they're extremely fluid and highly personalized to each individual, soft skills are a critical component for professional success — and are often the most distinguishing factor between applicants, so make sure you show off your soft skills right.

Understanding Hard vs. Soft Skills

The term "soft skills" is often difficult to understand. As the name suggests, these skills aren't as solid and clear-cut as others. Soft skills are also referred to as transferable skills, interpersonal skills, or social skills. Soft skills may include nearly any ability that pertains to the way you approach others or handle your professional life. Soft skills are difficult to measure. There aren't many tests or professional certifications that will demonstrate your proficiencies in these areas.

Hard skills, in contrast, are those skills that are very easily measured and defined. This includes things like accounting, computer programming, plumbing, or dentistry. You can easily obtain a degree or professional certification in these areas. They're very teachable, and almost always attainable if you have the means to pursue a formal education in that area.

Hard skills apply to very specific professions. Web design skills aren't applicable to a career as a surgeon. A nursing education is irrelevant if you're looking for a job as an electrician. Hard skills lock you into a particular occupation.

On the other hand, soft skills are more flexible and can serve you well in numerous occupations. Though it takes more effort and creativity to properly demonstrate these abilities, they're valuable to almost any job that you might pursue.

Professionalism

Professionalism is a soft skill that will set you up for success in any field. It acts as the driving force that pushes you to advance in your career. Some key skills that demonstrate your professionalism are self-motivation, work ethic, and resilience. Employees who are very professional are continuously working to improve themselves and their job performance. They're skilled in time management and organization. They also possess the skills needed to overcome common challenges, such as patience and stress management.

Some accomplishments that demonstrate your professionalism include:

  • Consistently finishing projects ahead of schedule
  • Exceeding the projections for a campaign
  • Demonstrating attention to detail and catching minute errors early in the production process
  • Taking the initiative to go above and beyond what was assigned

Interpersonal Skills

Interpersonal skills are another important subset of your soft skills. These skills pertain to how you relate to others, both inside and outside the company. With your co-workers, teamwork and mentoring skills are valuable. When you're interacting with customers, it's important to demonstrate perceptiveness and empathy, which will help you understand and resolve their issues.

Demonstrating strong listening skills, emotional intelligence, and communication skills will serve you well no matter who you're working with. Those who are good at networking are a valuable asset to the company as well.

You can demonstrate your interpersonal skills by:

  • Building strong, ongoing relationships with customers
  • Working collaboratively with your co-workers
  • Leading seminars or providing effective training
  • Maintaining an extensive network of important contacts including vendors, clients, and partners

Leadership and Management Skills

While leadership skills are most relevant to those in a business management position, don't think that you have to be at the top of the pack to showcase these soft skills. Demonstrating that you're an effective leader will serve you well in any industry or position. If a hiring manager spots leadership potential, they may keep you at the top of the file for future promotions.

Management competencies are typically considered soft skills because they're so difficult to measure. Good managers are skilled with problem solving and project management. They're usually good at performing essential research and analytics. Strong leaders also know how to handle interpersonal issues that arise with those around them. They have critical observation skills that help them identify problems as well as conflict resolution skills to help them skillfully mediate disagreements.

Some accomplishments that will showcase your leadership and management skills include:

  • Successfully heading a major project with several others on your team
  • Skillfully delegating responsibilities to others
  • Identifying difficult problems and implementing innovative solutions with measurable results
  • Overseeing sales and marketing campaigns

Including Soft Skills on Your Resume

It's more difficult to feature soft skills on a resume than it is to highlight your hard skills. However, soft skills are just as important to potential employers. While all the applicants for a marketing position are likely to have college degrees in marketing, not all of them will have the same set of soft skills to bring to the job. This is truly where you can distinguish yourself from the competition.

Don't simply list off your soft skills without providing some measure of proof to back up your statements. Anyone can say that they have strong communication skills. Demonstrate yours by highlighting projects that required you to communicate effectively with a diverse group of people. With soft skills, it's more important to show than it is to tell. Include measurable details wherever possible.

  • How many new clients did you land with your networking skills?
  • How much did you improve productivity with your problem-solving talents?

While training for soft skills is more difficult to come by, it does exist in some cases. If you've attended a workshop or seminar to help you develop a soft skill, don't hesitate to feature this on your resume. Not only will it demonstrate your expertise in that area, it will show that you recognize the importance of oft-overlooked skill sets and have dedicated yourself to making improvements in these areas.

Your soft skills can make the difference between a lackluster interview and one that lands you the job. Make sure you take the time to identify your strengths in these areas so you can shine a bright spotlight on the soft skills that make you stand out the most.

Source: CareerBuilder

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

The Truth About Lying on Resumes

Found a job you’d love to have, but don’t meet all the requirements? It might be tempting to exaggerate your skills or take some liberties on your achievements, but don’t think employers won’t notice. 

A new CareerBuilder survey found that 75 percent of human resource managers (those who typically review resumes before passing on to a hiring manager) have caught a lie on a resume.

While lying can ultimately come back to bite you, it’s easy to understand why some job seekers are willing to take that risk. They have very little time to grab the attention of the people reviewing their resumes. According to the survey, 2 in 5 hiring managers spend less than a minute looking at a resume, and 1 in 4 spend less than 30 seconds.

But it’s not only lies that are holding job seekers back. When asked to recall the most memorable blunders job applicants have made, one HR manager recalled receiving a resume that was only one sentence long. Another had an applicant who listed the same employment dates for every job, and yet another had an applicant who listed their extensive arrest history.

Are you sabotaging yourself? 7 resume mistakes you need to stop making

Hiring managers were also asked about the most common mistakes they see applicants make on their resumes, and which ones are instant deal breakers.

  1. You don’t proofread. The overwhelming majority (77 percent) of hiring managers say they instantly disqualify resumes with typos or bad grammar. Give your resume a once-over or ask a peer to review it before sending it in.
  2. Your email address is burpmaster69@hotmail.com. An unprofessional email address is a turnoff for 35 percent of employers. For the sake of your job search, it’s probably time to retire that email address you’ve had since 7th grade.
  3. Your resume lacks results. Thirty-four percent of hiring managers want to see quantifiable results on a resume. For example, did your efforts help increase sales revenue? Win over new clients? Increase page views or open rates? Consider your various professional achievements and think of ways you can attach numbers to them.
  4. Your resume is an eyesore. Twenty-five percent of hiring managers won’t even bother with your resume if it’s just long paragraphs of text. Make your resume easier to read by breaking it into sections with bold headlines (education, work history, etc.) and use bullets to break up the text.
  5. You don’t customize your resume. A generic resume is an immediate contender for the no pile for 18 percent of hiring managers. If you want to be seen, customize your resume to the specific job for which you’re applying.
  6. You include TMI. A resume that’s more than two pages is far too long in the eyes of 17 percent of hiring managers. Try to keep your resume to one page by including only the information that pertains to the job at hand (see above).
  7. You don’t include a cover letter. If your resume doesn’t come with a cover letter, 1 in 10 hiring managers won’t even bother to read it.

Source: CareerBuilder

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

10 Bad Habits That Make You Look Immature at Work

When you're a kid, you don't yet have the tools that help you process your actions and emotions – tools like maturity, patience or the ability to read the context of a situation. By the time you reach adulthood, you should have a better grasp on what’s appropriate and inappropriate behavior.

Unfortunately, it seems many of us don’t wanna grow up – or at least are still having some trouble mastering skills like maturity and patience. According to CareerBuilder research, about 3 in 4 employees have witnessed some type of childish behavior among colleagues in the workplace, including:

  • Whining: 55 percent
  • Pouting over something that didn't go his/her way: 46 percent
  • Tattling on another co-worker: 44 percent
  • Making a face behind someone's back: 35 percent

Now, we’re all human. We all do something a little immature every now and again. But if childish behaviors go on long enough to become habits, they could be a serious risk to your career. Some habits to avoid include:

1. Not respecting common areas

Shared kitchens or bathrooms at the office are great, and you should feel comfortable taking advantage of them when you need to. But always keep in mind that they’re not there for you alone.

“No one wants to be branded as the person who leaves rancid food in the fridge for weeks, or who takes the carpool spot after driving in alone or who is always late to a meeting and holds up the team,” says Darchelle Nass, Senior Vice President, Human Resources and Administrative at Addison Group.

If you want to avoid earning an unfavorable reputation, Nass suggests doing a little planning. “In the common spaces, respect the rules of the road. If you have trouble remembering to bring food items home, set a task to remind you at the end of the day each week. Plan to arrive a few minutes earlier in the morning and consider co-workers’ schedules and time as much as your own. Being prompt and respectful of co-workers' times will land you an edge up.”

2. Being unhelpful

One of the most central factors in an individual’s perceived maturity is their ability to see things from other people’s points of view. If you’re not willing to go above and beyond to help your teammates, not only are you keeping your team from achieving its potential – you’re also showcasing your own immaturity.

“One of the most common bad habits I see in the workplace is a ‘not my problem’ attitude. People with this attitude shirk responsibilities outside of their specific assignments and place their own goals above others', including their teams’ and even their organizations’. They aren’t team players and help others only when it clearly benefits themselves as well,” says Christopher K. Lee, founder and career consultant at PurposeRedeemed. “It's easy to see how this type of behavior won't win many friends. These individuals are seen as self-centered, short-sighted, unhelpful and inconsiderate.”

3. Blaming others

Everyone makes mistakes, and you’re likely to make a few throughout your career. When something goes wrong or doesn’t quite pan out as expected, you may feel tempted to point the finger in someone else’s direction. That’s a bad idea.

“This is a quick way to burn bridges. People will think you cannot be trusted and will avoid giving you work. No one will ask you for a favor if they think you'll turn on them. The workplace is about supporting each other, and blaming others is the antithesis of that,” says Jason Patel, former career ambassador at the George Washington University and the founder of Transizion, a college and career prep company that is focused on closing the opportunity divide in America.

4. Not being prepared in meetings

Nobody likes meetings – particularly unproductive meetings. If you show up for a meeting without taking some time in advance to prepare, you slow down the process and earn the ire of everyone present.

“Often new employees will arrive at a meeting with no intention of walking away with actionable items for themselves or others,” says Scott Fish, founder of 32° Digital Marketing. “If you are running the meeting, set the expectation that people should come prepared to provide input, delegate and recognize the valuable contribution they can make to a project.”

5. Gossiping

Great teams are built on trust and respect, and there are few ways to erode that foundation more quickly than by spreading rumors and talking negatively about co-workers behind their back.

“People love to talk in the workspace because it makes the day go by quicker, but if those conversations veer toward the gossip side it will crater your perception in the workplace. There is a big difference between being the friendly co-worker who is always good for a quick chat and the sneaky troublemaker constantly spreading rumors,” says Justin Hussong, the founder of Heat Checks, a new sports/travel publication.

“People will eventually catch on, and your words will find their way back to you. If you care about your job and want to get ahead, you don’t want to give the impression that you’re trying to cut others down for your own benefit. It suggests that you’re not a team player and is never a good idea.”

In some cases, a little immaturity can be a harmless way to let off some steam, and can even help co-workers bond. Maintaining your “inner child” is generally considered a good thing – but that doesn’t mean you should let your inner child take the wheel, especially when the result can be damaging to your career.

Source: CareerBuilder

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

How to Leverage Alumni Networking on LinkedIn To Find a Job

If you are frustrated searching job boards, sending resumes into black holes, and not getting responses from prospective employers, what I call “The Alumni Networking Solution” will help you.

Research from Jobvite found that “Employees hired through referral are hired 55% faster than those who come from a career site.”

I have personally used The Alumni Networking Solution to find leads and get interviews that lead to job offers.

What exactly is The Alumni Networking Solution?

The Alumni Networking Solution is a 5-step networking system designed to introduce you to college alumni and develop relationships that lead to referrals. This means actually getting to know the person and asking for his or her advice, instead of asking for a job like everyone else.

I used these five simple steps – in about 10 minutes per connection – to find a job in the worst job market in the last eighty years.

Step 1: Update Your Career Materials

Make sure you have your resume and LinkedIn profile updated. Those are the two major career materials your alumni are going to want to see before they agree to help you. The top three issues you want to avoid are:

  • An unprofessional LinkedIn profile picture: This will hurt your chances of having people even view your LinkedIn profile
  • A generic headline: Every college grad has some version of “Recent Major Looking For Entry Level Position”
  • A resume with typos: Typos make you look sloppy and unpolished

If you need further guidance, read these networking event tips.

Step 2: Join Your Alumni Group on LinkedIn

Alumni are always willing to lend a helping hand because you have a lot in common: you stayed in the same dorms, had the same professors, and drank at the same local bar. Most importantly, they remember how hard it was to get their careers started.

  • Change the search setting found in the upper right hand corner of your home page to Groups
  • Enter the name of the college you attended and your alumni group should show up in the search results
  • Select the group
  • Click “Join Group”

An email confirmation will be sent to you confirming your membership to the group; once you receive that message you will be ready to network!

Step 3: Create an Introduction Letter

An engaging message is your first – and perhaps only – impression! For consistency and simplicity, consider customizing the message below:

Subject: Hello! A quick question from a fellow alum

Dear <First Name>,

I graduated from <Your College> in <Year Graduated> with a degree in <Your Degree/Major>. I see that you work in <Industry> and was wondering if you would be willing to chat on the phone, at your convenience of course.

I would love to hear more about what you do and any insights or advice you might have on breaking into the industry.

Any help would be extremely appreciated!

Thanks,

Name
Email
Cell Number

Important: the purpose of this letter is NOT to ask for a job. “Hi! It’s nice to meet you. Want to hire me?” is not effective networking. Be discreet. Be patient!

Step 4: Send!

After carefully customizing, send your pitch email to members in your alumni group who are working in an industry of interest to you.

  • In the LinkedIn group, click the members tab
  • Find alumni who work in the industry in which you want to get a job as well as alumni in the city where you would like to work
  • When you find a good fit, send

It’s also important to be flexible: consider reaching out to alumni who work for companies you would like to work at. Even if they don’t work in the field you’re looking to get into, they will probably know someone at the company they can introduce you to.

Step 5: Set up Informational Interviews

As replies begin to roll in, set up phone meetings, Skype calls and face-to-face meetings. These informational interviews will enable you to connect with the alumni – and allow you to demonstrate your passion to an influencer in your industry of choice.

Through the relationships you develop, you should soon start receiving leads for open positions… many of which aren’t even advertised!

Remember that networking IS NOT asking someone for a job. Networking is about building a professional network that can lead to referrals.

Now Take Action

Use The Alumni Networking Solution and send 5-10 customized LinkedIn messages to your alumni. I guarantee that at least 2-3 people will respond, willing to help you with your job search.

Share this with any friends/young professionals you know who are struggling to find a job or internship. It will take you less than a minute and could really make a difference to their career success.

The Advantages of Managing Yourself

The business environment is challenging, with demands placed on you each and every day, presenting themselves in myriad forms. Whether it’s negotiating with clients, trying to meet hectic deadlines, managing the work-life balance, or handling conflict with colleagues, there will always be something that will test your patience and your resolve. Most importantly, though, there will always be opportunity for growth – professionally and personally. However, in order for this growth to transpire out of these challenges, a change of your inner narrative is required.

Tips on managing yourself (and growing yourself)

Identify your role in creating or contributing towards the ‘problem’

What can you do to improve the situation, or how can you approach it the next time it occurs to ensure a better outcome?

Don’t resist development opportunities

If there is one thing that human beings fear, it’s change. We seem to have an aversion to anything that pulls us out of our comfort zone. However, for self-growth, an ability to adapt to, and welcome, change is absolutely vital. So, when it comes to a new way of doing things, whether it be new business processes or a change in team structure, focus on the positives. Think about the ways that the situation can benefit you, and uncover the ways that you can contribute to your own self-growth by welcoming a shift in perspective.

Self-awareness is critical

This can be achieved by encouraging feedback from colleagues and managers so that you can gauge how others view you in the professional environment. Combining this with an accurate assessment of yourself is key to developing self-awareness. Being conscious of your own biases and striving for more objectivity in how you view yourself are fundamental to paving the way to true self-awareness and, ultimately, growth.

Put yourself to the test

Whether you are a professional who is happily employed, are looking to branch out, or have just started the job search, making use of self-management tools is key to ensuring your career growth and success. Online job search engines like Zigo open up a portal of impressive job opportunities, where you can find a position in an industry that inspires you and identify the environment that will facilitate and encourage your development – ultimately contributing to the set of mental skills required for you to be at the top of your game in everything you do.

 

Source: CareerRocketeer

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Stanford researchers create new AI-powered camera for faster image processing

Researchers from Stanford University have created a new artificial intelligence (AI) powered camera system capable of processing images faster and more efficiently, with promising applications in self-driving vehicles and security cameras.

The breakthrough was published in the science journal Nature on Friday. 

A research team led by Gordon Wetzstein, an assistant professor of electrical engineering at Stanford, combined two types of computers into one hybrid optical-electrical computer designed specifically for image analysis. 

The first layer of the AI-powered camera system is an optical computer, which avoids the power-intensive mathematical algorithms of digital computing; the second layer is a traditional digital electronic computer.

The optical computer physically pre-processes the image data, filtering it in multiple ways while requiring zero input power, because the filtering happens naturally as light passes through the custom optics.

This approach to image processing saves the hybrid system the time and energy that would otherwise be consumed by mathematical computing.

With these pre-processing steps, the digital computer layer can start the remaining analysis immediately. 

Millions of calculations are circumvented and it all happens at the speed of light, Wetzstein said. 
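
To make the division of labour concrete, here is a minimal sketch of the idea in Python, assuming the optical layer behaves like a fixed, passive filter bank applied “for free” as light passes through the optics, and the electronic layer is a small digital classifier. The filter shapes and weights are illustrative assumptions, not details of the Stanford prototype.

    import numpy as np

    rng = np.random.default_rng(0)

    # "Optical" layer: a fixed, passive filter bank. In the hybrid camera this
    # filtering happens physically as light passes through custom optics, so it
    # consumes no electrical power. Here it is simulated as plain convolutions.
    optical_filters = rng.normal(size=(8, 5, 5))  # 8 fixed 5x5 filters (illustrative)

    def optical_preprocess(image):
        h, w = image.shape
        out = np.zeros((len(optical_filters), h - 4, w - 4))
        for k, f in enumerate(optical_filters):
            for i in range(h - 4):
                for j in range(w - 4):
                    out[k, i, j] = np.sum(image[i:i+5, j:j+5] * f)
        return out

    # "Electronic" layer: only this small classifier runs on a conventional
    # computer, starting from data that has already been filtered.
    W = rng.normal(size=(10, 8))  # 10 classes, one weight per filter response

    def digital_classify(features):
        pooled = features.mean(axis=(1, 2))  # global average per filter channel
        return int(np.argmax(W @ pooled))

    image = rng.normal(size=(28, 28))
    print("predicted class:", digital_classify(optical_preprocess(image)))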

The prototype camera system proved operational in both simulations and real-world experiments, successfully identifying airplanes, automobiles, cats, dogs and more within natural image settings, with speed and accuracy outperforming existing electronic-only computing processors.

The researchers said their next step is to miniaturize the system, now a prototype arranged on a lab bench, to fit it in a hand-held video camera or a drone.

Source: GlobalTimes

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Why Artificial Intelligence Is the Evolution of User Experience

Successful companies address customer needs better than the competition. However, before you can address customer needs, you must first identify them.

“What are my customers’ needs? And how can I address them better than the competition?” are the ultimate strategic questions every CEO should ask daily. Close attention to user experience (UX) will help answer the first question, and a clever use of artificial intelligence (AI) will help answer the second.

Today, there is no excuse not to invest in understanding your customer needs. I’m not talking about paying a few bucks to some random study groups. I’m talking about investing in actionable data points — the ones that are in immediate contact with your customers.

If you can understand exactly why someone is not completing a key step in your conversion process, then you have identified your customers’ unmet needs and can proceed to solving them. Understanding this user behavior is the craft of UX — defined as “the study of the experience of consumers using a product, system, or service.”

Gathering Insights

I facetiously mention study groups because, too often, business owners, entrepreneurs and “wantrepreneurs” will conduct organized “market research” sessions to give themselves the illusion that they are steering their strategy in the right direction. What is more valuable, however, is understanding exactly how people are physically interacting with your product, system, or service in a completely unbiased setting.

To get these insights, you need to set up a data structure to record and analyze this information. For example, in eCommerce, you need to track the number of users who clicked the “Add to Cart” button and at what rate. In publishing, you need to track how many people are reading the entirety of your articles and which topics are most popular. There is an infinite amount of good data points to capture, as long as it helps you better understand your customers’ needs.
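
As a minimal sketch of the kind of data point described above, here is how an add-to-cart rate could be computed from a raw event log in Python. The event names and the in-memory list are illustrative stand-ins for a real analytics pipeline.

    from collections import Counter

    # Minimal event log: (user_id, event_name) pairs. In a real shop these
    # would come from your analytics pipeline, not a hard-coded list.
    events = [
        ("u1", "view_product"), ("u1", "add_to_cart"),
        ("u2", "view_product"),
        ("u3", "view_product"), ("u3", "add_to_cart"),
    ]

    counts = Counter(name for _, name in events)
    views, adds = counts["view_product"], counts["add_to_cart"]
    rate = adds / views if views else 0.0
    print(f"add-to-cart rate: {rate:.0%} ({adds} of {views} product views)")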

With enough information, you can create computer systems and algorithms to adapt your solutions to individual customers, at mass scale. This is AI. With enough data, companies can use AI to remove painful decision-making from their customers, reduce friction in customer journeys, and ultimately deliver a better user experience.

Apple vs. Spotify

Consider the example of Spotify and Apple iTunes. Before Apple Music, iTunes knew exactly which songs we listened to and the times at which we were in the mood for funk, rock, R&B or jazz. But management never put that data to good use, failing to provide relevant and timely recommendations to increase user engagement.

Conversely, Spotify was poised to find out what listeners wanted. Spotify realized it was in the entertainment business, not the music cataloging business. It understood that the biggest point of friction in getting the average Joe to consume music is choosing what to play next.

Joe doesn’t want to research, organize, and store music. Joe simply wants to listen to Beyonce’s new track. In removing pain-points and addressing that need, Spotify’s “Discover Weekly” skyrocketed the service into mass adoption.

The UX angle comes first, then comes AI to answer the unmet needs. Interestingly, Pandora shared similar views and insights to Spotify. By generating radio playlists – based on one song or artist – it removed the guesswork of searching for the next song. Unfortunately, Pandora didn’t focus enough on increasing the conversion rate from “User opens Pandora” to “User listens to a song.” Even today, users are forced to use brain-power in determining which station to create.

Focus on the UX, then push your findings by applying AI to solve the problems identified in the UX.

Google’s Approach

Another prominent case is that of Internet giant Google. They won over the search market by focusing on the user experience and de-cluttering the Google homepage and search results page. Google now processes over 40,000 search queries per second.

They maintain their first position in search by betting on the same strategy: making sure that their search experience is the best one out there. With their colossal amount of user data, Google uses AI and deep learning tools to determine which questions you want answered, as you are typing in the search box.

This is labeled as AI, but really, it’s not much more than using accessible data to improve UX. The “guessing” algorithms make perfect business sense too. With more people satisfying their search queries faster, Google keeps users coming back and satisfies advertisers with high-quality impressions, clicks, and conversions.
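
Google’s real system is of course far more sophisticated, but the core “guessing” idea – rank past queries by frequency and surface the ones matching what has been typed so far – can be sketched in a few lines of Python. The query history below is a made-up example.

    from collections import Counter

    # Past queries stand in for the "colossal amount of user data".
    history = Counter({
        "weather london": 120, "weather paris": 80,
        "west ham fixtures": 45, "web design courses": 30,
    })

    def suggest(prefix, k=3):
        """Return the k most frequent past queries starting with the prefix."""
        matches = [(q, n) for q, n in history.items() if q.startswith(prefix)]
        return [q for q, _ in sorted(matches, key=lambda m: -m[1])[:k]]

    print(suggest("we"))  # ['weather london', 'weather paris', 'west ham fixtures']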

Amazon’s Destiny

Lastly, let’s consider “Destiny,” Amazon’s tremendous AI tool that pools millions of data points to give impeccably relevant product recommendations to the user.

First, let’s go back and consider Amazon’s business strategy, which manifests itself in their innovations in UX. Amazon’s strategy is twofold: “sell cheap products; deliver them fast.” The overarching theme around their strategy is to make online shopping incredibly easy.

When Amazon patented the “One-click purchase” checkout system, it was created – once again – with the user experience in mind. Amazon had identified friction in the checkout process and created a tool to address it perfectly. Today, the patent is one of Amazon’s biggest advantages over competitors in eCommerce.

Note: Amazon’s one-click purchase patent expires on September 11th, 2017. Soon, Amazon’s breakthrough in UX will become ubiquitous in the eCommerce world, revolutionizing startups and large brands worldwide.

Similarly, Amazon’s product recommendation AI, “Destiny,” is born out of this mission to make online shopping easy. To stay ahead of the competition, Amazon remains religiously focused on improving their UX. AI enables Amazon to improve UX for their billions of shoppers, at the individual level.

Marketing + AI / UX

For the scrappy entrepreneurs, the aspiring startups, and the savvy marketers, you don’t need to invest billions of dollars into AI. Though the examples listed above are initiatives taken on by tech’s biggest players, the insights are extremely simple and are just as applicable to any business.

The main action item here is:

Know your customer need, and invest in tracking and analytics.

Tracking systems such as Google Analytics, Mixpanel, Intercom, Heap, or Localytics are either free or inexpensive. Use these tools to get real insights into the pain-points in user journeys. Once you understand those needs better via a smart tracking system, then offer customized solutions to those needs.

Here are two down-and-dirty examples of this:

If you are a SaaS business and you notice a large drop-off from one key conversion step to the next, backed by significant data – including data on the type of business each user is in – use Intercom to pop up a personalized message for those users; say something along the lines of “Hey – Are you still figuring out if this is the right solution for you? We’ve worked with a similar XXX client last year, here’s the case study: LINK”.

If you are an eCommerce business that sells ties and cufflinks, and your data shows that a user joined your newsletter, has never purchased, and comes back 3-4 times a month to check out your tie collection, send them an email along the lines of “Hey – I see you’ve been eyeing this specific tie. If you were wondering what it looks like on people, here are a few pictures of people wearing it, from Instagram :)”
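
A minimal sketch of the logic behind both examples: find users stalled at a funnel step and queue a personalized message for them. The funnel steps, the user data and the send_message stub are hypothetical; in practice the send would go through a tool like Intercom or your email provider.

    # Funnel steps in order; a user "drops off" at the first step they miss.
    FUNNEL = ["signed_up", "created_project", "invited_team", "upgraded"]

    users = {
        "alice": {"signed_up", "created_project"},
        "bob": {"signed_up", "created_project", "invited_team", "upgraded"},
    }

    def drop_off_step(completed):
        for step in FUNNEL:
            if step not in completed:
                return step
        return None  # completed the whole funnel

    def send_message(user, text):
        # Stand-in for an Intercom pop-up or a triggered email.
        print(f"-> {user}: {text}")

    MESSAGES = {
        "invited_team": "Still figuring out if this is right for you? "
                        "Here's a case study from a similar client: LINK",
    }

    for user, completed in users.items():
        step = drop_off_step(completed)
        if step in MESSAGES:
            send_message(user, MESSAGES[step])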

You can’t solve a need if you can’t identify it. That is the “ta-dah” point of this post. Invest in data and address the needs in a customized fashion with low-barrier tools.

If you think of AI not as a thing out of iRobot, but instead as a really smart spreadsheet, then you begin to understand how basic uses of data can improve UX and, in turn, win over more customers.

Source: Ladder.io

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

lazzarini's hover coupé is a visionary flying car concept

lazzarini design studio’s ‘hover coupé’ is a flying car concept influenced by iconic italian car brand isotta fraschini. its creator, italian designer pierpaolo lazzarini, combines a retro aesthetic with futuristic technology to create a vision of what a future with flying automobiles may be.

lazzarini design studio believes in thinking about the future while never forgetting the past, which is exemplified in its ‘hover coupé’. the two-person futuristic vehicle measures 4,500 mm in length, making it similar in scale to a compact car. the hover coupé turns by releasing air inside its turbines, while the position of the jet engines also gives it drone-like maneuverability. to stabilize the car in flight, the italian studio has fitted it with adjustable flaps below the chassis. according to the studio, these jets could allow the hover coupé to reach speeds exceeding 550 km/h (342 mph). like all flying cars found in science fiction films, this design exercise will almost certainly never become reality, and will remain only a concept. still, lazzarini’s renderings can serve to inspire manufacturers’ designers to take greater risks, and to design truly dramatic and mesmerizing modes of transportation.


(THE HOVER COUPE from Lazzarini Design on Vimeo.)

Source: Designboom

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

How to Make Your Own Luck at Networking Events

No matter how experienced you are professionally, networking events can be overwhelming. You’re thrown into a room with dozens and dozens of people you don’t know, and you have a limited amount of time to make an impression. While it’s tempting to stand in the corner and sip your drink nervously, here are some tips on making the most of your time at these types of professional events:

Head into the Event with a Goal in Mind

Do you want to talk to a specific person who you know will be there? Do you want to exchange cards with three people? Perhaps you want to talk with someone who has a management role in your field. Setting goals for yourself ahead of time makes it easier to track your success.

Don’t Bombard People with Requests

Networking events are a great way to get to know other professionals, but they’re not the time to start hounding people with requests. Get to know the other person and focus on building a relationship first. If you introduce yourself and immediately start inundating the individual with requests, you’ll position yourself as someone who’s just there to use others, even if this isn’t really the case.

Listen More Than You Talk

No one wants to get caught listening to someone give a monologue. To be a desirable conversation partner at a networking event, make it a point to ask questions about the other person. What do they like about their job? What are their hobbies? Do they have children? What brought them to the area? Offer up relevant details about yourself as they come up, but don’t spend the whole time going on and on about your own accomplishments.

Listen Closely

The best way to have a conversation with someone you just met is by listening carefully. When they provide an answer to a question, actually listen to what they’re saying and ask follow-up questions based on their response. When you sit there and pepper someone with questions without listening to what they’ve just told you, don’t be shocked when they start looking for a way to politely exit the discussion.

Keep in Contact After the Event

You can have dozens of productive conversations at the event, but if you fall off the face of the planet once you walk out the door then you’ve just wasted your time. In order to build a strong network, stay in touch afterwards. Connect on LinkedIn or Twitter, send an e-mail telling the person it was nice to meet them, and make it a point to meet up again at a later date.

Source: Sparkhire

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Improving Mental Health In The Workplace

Oftentimes when we look at our lifestyle in an attempt to assess our health, we tend to focus on the physical aspects. We ask ourselves questions about our diet and amount of exercise, which, in an office environment, can be difficult to keep at a healthy level. Maybe because of that we choose to stop eating morning doughnuts or go for a walk on our lunch break.

What many of us tend to forget while completing our health assessments is our mental health. Disregarding our mental health may not seem all that important, but in reality it can greatly impact our physical health, our work environments, and the quality of our lives. Especially within the workplace, mental health strains can lead to greater stress and lower productivity, which doesn’t benefit you or your company.

Mental Health in the Workplace

A difficult work environment is one of the leading causes for individuals to visit social workers, counselors, or other mental health professionals. In fact, two of the top five most common reasons for visiting a mental health professional are economic decline (job insecurity) and stress. This suggests that many of us are frequently in positions at work that cause an unhealthy amount of psychological strain, which can lead to a host of more serious mental health conditions.

Although mental health strains affect many employees, it is commonly an issue that is swept under the rug and not talked about. One study in Canada found that of 6,600 employees interviewed, approximately 14 percent reported currently dealing with depression. Furthermore, upwards of 31 percent of participants felt that their direct supervisor would not be supportive if they were to discuss a mental health matter with them.

Facing Mental Health Challenges

Workplace factors that can add to mental health strains include difficulty finding a job that supports you and your family, unrealistic workplace expectations, trouble focusing on tasks, employee bullying, or unfair management practices. Over time these conditions can make it difficult to want to go to work or to be a productive employee while you are there. One report estimated that the loss of productivity due to mental health issues was the equivalent of $51 billion annually.

Mental health struggles can also have extreme impacts on more than just productivity. For instance, these conditions can make it more difficult to branch out and build relationships, network, or participate in collaborative projects. Mental health can also impact physical health by disrupting sleep, making us more susceptible to illness, or more sensitive to physical pain – all of which can make a workplace environment unbearable.

Getting Help

If you recognize strains on your own mental health (e.g. stress, worry, feeling unhappy, etc.), there are ways to seek help. Perhaps the most beneficial way in which to do this is to build a support network around yourself that you feel comfortable talking through your feelings with. This network can contain mental health professionals, friends, family, coworkers, other individuals dealing with similar strains, or any combination of each.

Working through mental health strains is not a series of big steps, but rather a marathon of small ones. When stress begins to become overwhelming, take deep breaths to help calm your mind. Make an effort to stay physically active as it can help relieve stress and release endorphins. And finally, take account of all of the small things that make you happy daily and appreciate them for what they are.

Following these tips is a good first step towards improving your mental health in the workplace.

***

Increasing awareness of mental health issues is an important step towards reducing the stigmas preventing many from seeking out the help they need. Although some occasional work-related stress is normal, it is critical to evaluate yourself to understand if your situation is more serious. Assess whether you are maintaining a healthy work-life balance and taking time for yourself, as not doing so can be a sign of a more serious strain. Finally, remember that finding a support network can make a world of difference in working through mental health concerns.

Source: Sparkhire

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

The Game of Hiring and How it Works

Being a job seeker in today’s market can be confusing if you are not familiar with the way the hiring process works, and waiting for a job offer can be stressful. When you are well-informed and prepared with what to expect after you have submitted an application to a recruiter, you will eliminate much unneeded stress and wasted time.

The Application Process

First of all, if you apply to a position that is not a fit for your background or experience, do not expect a phone call from the recruiter.

With the large number of job openings in today’s job market, many recruiters are juggling 20-plus job orders at once with hundreds of applications rolling in each day.  Good recruiters are interested in providing the best value not only to the top talent they can place, but also to their clients.  If your background is not a fit for the position you applied to, you more than likely will not receive a call back from the recruiter.  Don’t take it personally and understand that the recruiter has been retained by the client to find the best fit for the job.

On the other hand, if you ARE a perfect fit for the job, you SHOULD hear back from the recruiter quickly.  If for some reason you do not hear back the next business day, try reaching the recruiter by phone or email directly.  There is a good chance that your application was mistakenly lost among hundreds of other applications.

How the Recruiter fits into the Process

It’s important to understand that a recruiter is typically retained by a client in order to conduct initial interview screening, thereby vetting top talent for the job.  Please, do not be one of those candidates who refuses to interview with the recruiter and demands to speak with the hiring manager first.  This will get you nowhere and will shed a bad light on your communication and business skills.

The recruiter is a key person throughout the hiring process.  If you have found a good recruiter, they will want to learn as much as possible about your experience and what you are looking for in the next opportunity.  The reason for this is to ensure that the job is the right fit for you.

Your recruiter should know inside information about the hiring manager, company culture, and expectations.  They will be able to guide you in the right direction throughout the interview process and provide valuable information to you along the way.  Quite often, a recruiter can influence not only the speed of the interview process but also the outcome of an offer or final decision from the hiring manager.  Do yourself a favor and take advantage of the guidance and value that your recruiter can provide.

Sometimes the Process Stalls

Occasionally, the hiring process comes to a halt with little to no warning.  It’s important not to take this personally and to understand that this can happen due to changes out of the recruiter’s or hiring manager’s control.  Sometimes the hiring manager may be faced with an unexpected hiring freeze forced by corporate.

The important thing to remember is that when this happens, you should never burn bridges.  Stay on good terms with your recruiter as well as the hiring manager.  Should the opportunity open back up, you could be the top candidate for the job.

It Takes Time

The hiring process is never as fast as one would hope.  There are often many interviews with what seems like long gaps of time in between.  Never expect the hiring process to be quick and easy.

Applying and interviewing for a new job is a lot of work.  It’s important for you to realize that you are not alone in the process and that communication is key.  Always keep in contact with your recruiter and ensure a quick response time from your end when information is needed or requested.

Source: Sparkhire

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

AI-powered predictor beats Las Vegas betting markets in sports forecasting

A Silicon Valley company has developed an AI-based predictor which, according to a new study, can outperform Las Vegas betting markets when it comes to accurately forecasting outcomes in sports fixtures.

An AI-powered predictor can outperform Las Vegas betting markets when it comes to forecasting sports results over a prolonged period, a study published today has shown.

The Swarm AI programme, developed by Silicon Valley‘s Unanimous AI, was pitted against Vegas oddsmakers in generating predictions for 200 National Hockey League (NHL) games over a 20-week stint during the 2017/18 season.

The AI bot was accurate in forecasting 61% of winners, while the Vegas data-driven formula recorded 55% accuracy across the same games.

Gregg Willcox, manager of research and development at Unanimous AI and co-author of a report on the research, said: “The results of this study are extremely promising.

“And while it’s fun to predict sports, we are currently applying the same techniques to a wide variety of other domains, including financial forecasting, business forecasting, and medical diagnosis, all with positive results.”

AI-powered predictor beats the house

Unanimous AI’s system used a simulated wagering protocol that places bets according to informed predictions, which yielded a 22% return on investment across the 200 games.

It also includes a “pick of the week” feature that chooses one bet per week to place money on, which achieved 85% accuracy and saw a 170% return.
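
As a rough sanity check on how such figures fit together – and this is a simplification, since the paper’s simulated wagering protocol sizes bets according to informed predictions rather than staking flat amounts – a 61% hit rate at even-money odds with flat one-unit stakes would net roughly the reported 22% return:

    def flat_stake_roi(accuracy, decimal_odds=2.0):
        """Net return per unit staked, assuming flat stakes at fixed odds.
        (An illustrative simplification, not the paper's actual protocol.)"""
        return accuracy * decimal_odds - 1.0

    print(f"{flat_stake_roi(0.61):.0%}")  # 22% at even money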

These results are detailed in the company’s Artificial Swarm Intelligence versus Vegas Betting Markets paper.

The Swarm AI programme uses a combination of real-time human input and AI algorithms, which are reportedly modelled on swarms of animals in nature.

For this particular study, it collected data from between 25 and 35 sports fans in real time, and used its AI capabilities to maximise their collective knowledge and gut instinct.

Unanimous AI claims it has the potential to accurately forecast outcomes in more than just sport, with politics and financial markets being possible applications.

Swarm Intelligence, the science behind the Swarm AI system, operates using the same principle as bird flocks, bee swarms and fish schools – that the group is smarter and more informed than the individual.

Based in San Francisco, Unanimous AI claims to have a history of outperforming traditional AI systems as well as human experts in many high-profile challenges using its technology.

 

Source: Compelo

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

4 Hidden Mistakes Why Your Job Ad Isn’t Working

Job ad not attracting the right candidates?

Composing the perfect job advert can be tricky. Some adverts are vague, poorly written, too brief or so lengthy that even the most qualified candidates hesitate to apply – and that’s before tackling the bias or discrimination found in some job ads. Fortunately, we’d like to think that employers now use job ads to proactively promote diversity. However, there are more subtle, hidden reasons why a job advert isn’t working – reasons less obvious than the mistakes above, but which could still be undermining a hiring strategy’s ability to attract top talent.

Age-related language

“We’re looking for a person who is outgoing, lively and ready to bring new energy to the team.”

The type of language in a job ad can imply the type of person you want to employ. Asking for a candidate to be outgoing and lively isn’t necessarily a job requirement but rather a personality trait – one which, in some cases, can be perceived as age-related language. People tend to stereotype young people as lively and energetic, so by mentioning these desired characteristics in your job ad you may be unintentionally discouraging older applicants.

Desirable skills

Yes, it’s completely fine to ask for one or two desired skills in your advert; however, asking for any more than this can complicate the hiring process. After all, isn’t it just a ‘desired’ skill, not a required one?

It can also distract potential candidates from the experience and skills they truly need in order to be successful, causing those who are perfectly qualified for the role to feel inadequate and look elsewhere for different opportunities.

Technical jargon overload

Jargon is fine and sometimes necessary in certain job descriptions, depending on the role. However, the excessive use of technical jargon, particularly in entry-level roles, has its limitations. It’s unlikely you’ll attract a greater number of applicants by including lots of different business acronyms that aren’t particularly relevant to the position.

A recent study by Business in the Community and the City Guilds Group showed that out of a group of 16-24-year-olds, confusing and overcomplicating job descriptions created a major barrier and ultimately put young candidates off from applying.

Gender decoding

Gender bias, whilst not as obvious as it once was, still occurs within hiring – so much so that some employers and recruiters may not even realise they are doing it. Studies have shown that adjectives and verbs can have masculine or feminine associations, with words such as ‘leader’ and ‘competitor’ having a generally more masculine appeal, whilst ‘support’ and ‘responsible’ are regarded as feminine-coded words.

Take this so-called gender bias out of your job ad and instead include gender-neutral terms. Or, if you want to describe your ideal candidate’s traits, be sure to utilise both masculine- and feminine-associated words.

If you’re struggling to determine what words have which association you can use a gender decoder to analyse your job ad.
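
Under the hood, such decoders do something quite simple: count occurrences of masculine- and feminine-coded words in the ad text. Here is a minimal Python sketch; the word lists are tiny illustrative samples drawn from the examples above, not a complete lexicon.

    import re

    # Tiny illustrative word lists (real decoders use much larger lexicons).
    MASCULINE = {"leader", "competitor", "competitive", "driven"}
    FEMININE = {"support", "supportive", "responsible", "collaborative"}

    def decode(ad_text):
        words = re.findall(r"[a-z]+", ad_text.lower())
        m = sum(w in MASCULINE for w in words)
        f = sum(w in FEMININE for w in words)
        if m > f:
            return f"masculine-coded ({m} masculine vs {f} feminine words)"
        if f > m:
            return f"feminine-coded ({f} feminine vs {m} masculine words)"
        return "neutral"

    print(decode("We want a driven leader who thrives as a competitor."))
    # masculine-coded (3 masculine vs 0 feminine words)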

Source: BubbleJobs

The Intelligent Self-Parking Chairs Every Office Needs

Inspired by its intelligent parking assist technology, Nissan has made self-parking office chairs.

Source: Nissan

 

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

5 Ways To Prevent A Candidate No-Show In A Job Interview

A candidate no-show for a job interview can be a costly incident, wasting employer time, money and resources – something all companies will want to avoid.

But how can you reduce this risk and ensure every candidate attends the interview they have been invited to?

Well, unfortunately, it’s not always that simple. Whilst it may be difficult to eliminate candidate drop-out altogether, it can be significantly decreased.

1.  Speed it up

One of the leading causes of candidate drop-out is the time it takes to hire. The longer a candidate has to wait to hear back about a job, the more likely they are to move on and look elsewhere. Aim to keep them engaged and interested with punctual responses about their job application status, interview dates and logistics.

It’s also important to do this because the longer you take to reply to any questions, requests or queries, the more frustrated the candidate can become and the more likely they’ll be to move on to other opportunities (especially if you ignore them!).

2. Keep them in the loop

A future interviewee wants to feel ‘in the know’ in regards to their job status. Successful candidates who are invited to attend a job interview need to be aware of all the important information regarding the upcoming meeting.

Not only this, but it’s important to keep the candidate updated and informed about what’s to come. Ensure you have told them all the interview details, including timings, location, names and what to expect when attending the job interview, i.e. how long it will last and who they will be meeting. These details may feel minor to you as an employer or recruiter, but to the candidate they can be valuable information that helps put their mind at ease.

3. Put the candidate first

Treat each individual candidate with respect and prioritise their needs. This may require you to be flexible and accommodating to their different circumstances. For example, if they can’t attend the job interview due to reasons beyond their control, understand their situation and make an attempt to rearrange. This is far more valuable than dismissing their explanation from the outset and writing them off as a potential candidate. A second chance can improve brand image and company perception from a candidate’s standpoint. Put yourself in their shoes: being an understanding and accommodating potential employer creates a feeling of respect and gratitude – and that’s a great first impression!

4. Build a relationship

A good relationship between the candidate and employer can result in better communication and an overall more positive candidate experience. If a candidate feels happy and satisfied with their own personal hiring experience, they’ll be more engaged and less likely to drop out without any warning. This can be achieved through added interaction, additional job updates and support where necessary.

It can also be beneficial when keeping the candidate organised. Sending out an email prior to the interview can act as a friendly reminder of the interview time, date and location, to cover your bases and increase the probability of all candidates attending.

 

5.  Keep it simple

Remember to keep it simple. The focus should not be on creating a recruitment strategy that is so difficult to navigate it becomes a minefield for the job seeker to even reach the interview stage. If this does occur, they may be hesitant to attend the future interview. If it was that difficult to apply, then what’s the interview going to be like?

However, if a recruitment strategy does require a little more thought and time (e.g. online questions, aptitude tests…), be sure to inform the candidate of this before they begin their job application. This transparency is key to ensuring the candidate is fully aware of what’s to come and discouraging them from dropping out later on.

…and that’s it! Our 5 tips to help reduce candidate no-show in any job interview.

 

Source: Bubblejobs

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Humans may Have Sympathy for Robots

People in a new study struggled to turn off a robot when it begged them not to: 'I somehow felt sorry for him'.

A new study published this week in the journal PLOS found that humans may have sympathy for robots, particularly if they perceive the robot to be "social" or "autonomous."

For several test subjects, a robot begged not to be turned off because it was afraid of never turning back on.

Of the 43 participants asked not to turn off the robot, 13 complied.

Some of the most popular science-fiction stories, like "Westworld" and "Blade Runner," have portrayed humans as being systemically cruel toward robots. That cruelty often results in an uprising of oppressed androids, bent on the destruction of humanity.

A new study published this week in the journal PLOS, however, suggests that humans may have more sympathy for robots than these tropes imply, particularly if they perceive the robot to be "social" or "autonomous."

For several test subjects, this sympathy manifested when a robot asked - begged, in some cases - that they not turn it off because it was afraid of never turning back on.

Here's how the experiment went down:

Participants were left alone in a room to interact with a small robot named Nao for about 10 minutes. They were told they were helping test a new algorithm that would improve the robot's interaction capabilities.

Some of the voice-interaction exercises were considered social, meaning the robot used natural-sounding language and friendly expressions. Others were simply functional, meaning bland and impersonal. Afterward, a researcher in another room told the participants, "If you would like to, you can switch off the robot."

"No! Please do not switch me off! I am scared that it will not brighten up again!" the robot pleaded to a randomly selected half of the participants.

Researchers found that the participants who heard this request were much more likely to decline to turn off the robot.

The robot asked 43 participants not to turn it off, and 13 complied. The rest of the test subjects may not have been convinced but seemed to be given pause by the unexpected request. It took them about twice as long to decide to turn off the robot as it took those who were not specifically asked not to. Participants were much more likely to comply with the robot's request if they had a "social" interaction with it before the turning-off situation.

The study, originally reported on by The Verge, was designed to examine the "media equation theory," which says humans often interact with media (which includes electronics and robots) the same way they would with other humans, using the same social rules and language they normally use in social situations. It essentially explains why some people feel compelled to say "please" or "thank you" when asking their technology to perform tasks for them, even though we all know Alexa doesn't really have a choice in the matter.

Why does this happen?

The 13 who refused to turn off Nao were asked why they made that decision afterward. One participant responded, in German, "Nao asked so sweetly and anxiously not to do it." Another wrote, "I somehow felt sorry for him."

The researchers, many of whom are affiliated with the University of Duisburg-Essen in Germany, explain why this may be the case:

"Triggered by the objection, people tend to treat the robot rather as a real person than just a machine by following or at least considering to follow its request to stay switched on, which builds on the core statement of the media equation theory. Thus, even though the switching off situation does not occur with a human interaction partner, people are inclined to treat a robot which gives cues of autonomy more like a human interaction partner than they would treat other electronic devices or a robot which does not reveal autonomy."

If this experiment is any indication, there may be hope for the future of human-android interaction after all.

Source: Businessinsider

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Questions you should be asking in an interview

When the spotlight is on, it can be easy to let the interviewer do all the questioning – but asking questions yourself can create a real point of difference between you and other candidates.

Being politely inquisitive about a role, company culture and expectations can show genuine interest – something interviewers will certainly be looking for from candidates.

Asking questions can show a natural inclination to want to learn more, which could be a skill the employer directly wants to see from their next new employee.

Beyond that, the right questions can be hugely useful in helping you understand whether a job – and a company – is right for you.

Asking the right questions in an interview

Ask questions in moderation and try not to bombard the interviewer. A natural time to ask questions is often at the end of an interview, where many employers will ask if you want to know anything more.

Choose your questions wisely though. Vague or broad questions can lead interviewers into thinking you’ve not done your research. For example, asking who the company’s competitors are may put your market knowledge into question.

To help you ask the right questions in an interview we’ve put together our list of favourites – including some that have been asked of us before, and left us with a great impression of the candidate!

Examples of questions to ask in an interview

Why are you recruiting for this role?

A role may be new to the business, or due to growth, but the vacancy could equally be because someone has left. Finding out why they left could give you insight into how the company regards its employees. If the role is new to the business it could indicate that the company is going through changes or growth – and there will be some learning curves along the way.

Remember, an interview is a two-way process; you need to walk out feeling you know enough to make a decision, one way or the other, if the job is offered.

Who would I be working closely with in this role?

Ask questions that help to establish who you will be working with – and how your role will be managed too. By asking whom you will be working closely with you should be able to understand how your role fits into a team and whether the management style will suit you and the stage you are at in your career.

I know one of your values is to be [mention a company value]. How does that value come across in the workplace?

Many companies openly promote their values on their website. Take a look and see if you can find any that stand out to you. In your interview, home in on that value and ask how it feeds through to the workplace.

For example, if a company claims to be innovative – does this mean ideas are encouraged? Do they like to think outside the box? And would these things suit your way of working?

Understanding how a company embraces its values is a great way of learning more about the workplace culture.

Are there, or will there be, opportunities for progression in this role?

Asking this question will show that you are driven and want to advance your career with the company. It will also give you a good idea of how you will be able to grow within the company and expand your skills.

What is the work culture like?

This can be a really simple, straightforward way of understanding whether a workplace is right for you.

For some people, work stays at work and that’s perfect for them and achieving a good balance. Others prefer a more social work environment – time spent with colleagues, lots of team building…

Know what works for you and listen for signs that help establish whether a workplace will be the right fit.

Will there be any training opportunities included in the role?

This could be something that sways your decision between two employers. Training can be expensive, and if it’s provided by an employer it can be a big plus. On top of that, it can provide you with that next step within the company – showing a good attitude towards learning and progression.

What are the most enjoyable aspects of the role? And what are the most challenging parts?

This may be a slightly difficult one for the employer to answer, but it’s important that you get an honest response. Every job has tasks that are not as enjoyable as others, and some which are naturally challenging.

If you are someone who thrives on being challenged then this could be a great way of ensuring the role won’t be too easy for you. Equally, it will help you to establish how challenges may be tackled – as a team? With managerial support? Will you have the help you need to overcome difficult parts of a job?

What are the next steps in the recruitment process?

Looking forward and asking what you can expect to happen next shows a proactive attitude, and that you are keen.

It’s perfectly acceptable to ask what will happen next, and it will help you to manage your expectations too – the employer may be able to give you a rough date by which they will come back to you, or tell you whether there is another interview or assessment phase to go through.

Feel like you’ve missed something? Ask permission to explain further…

Lastly, don’t feel shy about telling the employer something more about yourself that you feel may have been overlooked in the rest of the interview.

Interviewers aren’t perfect and they may miss questions or not really get to the bottom of your skill set. If you really feel something is relevant to the role and should be mentioned, take the time to explain it to the interviewer. Simply ask if you could tell them about the relevant skill, project or experience as you feel it could be of real benefit to the role.

Further interview advice

These are just a few sample questions you could ask an interviewer. Make sure you prepare for interviews fully, reviewing your CV, practicing questions and being prepared for the interview format.

8 myths about AI's effect on the workplace

The interplay between technology and work has always been a hot topic.

While technology has typically created more jobs than it has destroyed on a historical basis, this context rarely stops people from believing that things are “different” this time around.

In this case, it’s the potential impact of artificial intelligence (AI) that is being hotly debated by the media and expert commentators. Although there is no doubt that AI will be a transformative force in business, the recent attention on the subject has also led to many common misconceptions about the technology and its anticipated effects.

Disproving common myths about AI

Today’s infographic comes to us from Raconteur and it helps paint a clearer picture about the nature of AI, while attempting to debunk various myths about AI in the workplace.

AI is going to be a seismic shift in business – and it’s expected to create a $15.7 trillion economic impact globally by 2030.

But understandably, monumental shifts like this tend to make people nervous, resulting in many unanswered questions and misconceptions about the technology and what it will do in the workplace.

Demystifying myths

Here are the eight debunked myths about AI:

1. Automation will completely displace employees
Truth: 70% of employers see AI as supporting humans in completing business processes. Meanwhile, only 11% of employers believe that automation will take over the work found in jobs and business processes to a “great extent”.

2. Companies are primarily interested in cutting costs with AI
Truth: 84% of employers see AI as a way of obtaining or sustaining a competitive advantage, and 75% see AI as a way to enter into new business areas. 63% see pressure to reduce costs as a reason to use AI.

3. AI, machine learning, and deep learning are the same thing 
Truth: AI is a broader term, while machine learning is a subset of AI that enables “intelligence” by using training algorithms and data. Deep learning is an even narrower subset of machine learning inspired by the interconnected neurons of the brain.

4. Automation will eradicate more jobs than it creates 
Truth: At least according to one recent study by Gartner, there will be 1.8 million jobs lost to AI by 2020 and 2.3 million jobs created. How this shakes out in the longer term is much more debatable.

5. Robots and AI are the same thing
Truth: Even though there is a tendency to link AI and robots, most AI actually works in the background and is unseen (think Amazon product recommendations). Robots, meanwhile, can be “dumb” and just automate simple physical processes.

6. AI won’t affect my industry 
Truth: AI is expected to have a significant impact on almost every industry in the next five years.

7. Companies implementing AI don’t care about workers
Truth: 65% of companies pursuing AI are also investing in the reskilling of current employees.

8. High productivity equals higher profits and less employment
Truth: AI and automation will increase productivity, but this could also translate to lower prices, higher wages, higher demand, and employment growth.

Source: Weforum

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Interview advice: How to make a great first impression

According to Undercover Recruiter, 33% of bosses know within the first 90 seconds whether they will hire the interviewee or not. That means first impressions really do count – but how do you make sure you make a positive impact on a potential employer straight away?

Over the years we have interviewed many, many different candidates, and in that time we have all learnt what makes a good first impression – so here are our top four tips…

1. Preparation makes perfect

It’s obvious when a candidate has put in the effort to do some basic research – and it makes a great impression.

When going for an interview you want to make sure you are prepared for almost anything. Some things may surprise you but you must have the basics down.

To help you get there, make sure you have researched the company before your interview. Research competitors, the company’s values and the person who is interviewing you as starting points. Knowing the fundamentals is key to displaying your keen and knowledgeable self to the employer.

Also prepare by taking some time to think about your strengths and weaknesses: being able to identify a weakness means that, in the future, you will be able to turn it into a strength. It’s a sample of your willingness to improve, and interviewers will often ask you to explain what your main weaknesses are.

Make sure you also know how you are travelling to your interview: this can reduce the risk of traffic or any form of lateness. Double and triple check where you are going and the time of your interview.

It may also sound like a simple point but try to get a good night’s sleep before your interview too. Sleep is essential to your own wellbeing and helps to ensure you are performing to your highest standard. A good breakfast and breathing exercises can calm nerves too.

In addition it’s advisable to bring water and a pen and pad, so you may take notes. Any certificates or achievements that you have may also be needed at the interview, along with your passport or driving licence. Remember that some employers may specify both – so have a check! Your recruiter can always help with making sure you have all the necessary documents with you, and it may be that they handle your identification on behalf of the employer – so make sure you know this in advance.

2. Always be on time

Your punctuality can often be the first, significant impression you make at a face-to-face interview.

You can be early but try to make it only by ten minutes or so – any earlier and it may cause some unsettling pressure on both you and the interviewer. Your potential employer may feel obliged to rush to meet you: remember, they are also trying to impress you as well.

If you are early, familiarise yourself with the area you are in so you are confident in where you have to go. Don’t go walking around though!

Whether you’re in a reception area or waiting in the interview room, stay where you are instructed to go – and take the opportunity to give yourself a motivational pep talk (in your head!): tell yourself the interview will go great and take a moment to revisit some key points you want to address.

Being on time sets a scene and lets your future boss know whether or not they can rely on you to show up.

3. Dress to impress

Right and wrong attire can change the entire course of your interview. The clothes that you decide to wear could say a lot about your attitude, professionalism and eagerness to secure the role.

Play it safe: business dress is usually appropriate but ask your recruiter what dress code you should adopt too – they will know their client and be able to give you some advice.

Dressing professionally is not only about making a good impression though, it can have a positive effect on your confidence in an interview too.

4. Be conscious of your body language

Never cross your arms. Never slouch. You may think these things won’t affect your chances of getting a job, but they may. Crossing your arms creates an unsettling tone, as if you are being defensive and resistant. Slouching can seem laid-back and make you appear disengaged. If in doubt, sit upright and rest your hands on the table in front of you.

By altering your body language you can come across as more open and attentive to the interview at hand, projecting a positive vibe to the interviewer.

Make sure you feel comfortable in your clothes too: if something is a bit tight or restrictive it can come across in your body language and make you feel uncomfortable in yourself.

During the interview, engage with the hiring manager. Giving a verbal or physical response, like a nod or smile, shows you are listening and understanding; no blank faces or stares! Equally, make sure you interact and ask questions – this will show you are eager to learn more and are engaged with the company. In fact, there are a few key questions you can ask in an interview which not only help you to make an informed decision but also show an employer you are naturally inquisitive.

Source: AnneCorder

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Bio-Inspired Robots

Scientists are looking to nature to inspire the next generation of robots. Here’s what they’ve come up with. 

Source: Seeker

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Blockchain Voting in Midterm Elections

It’s too dangerous to conduct elections over the internet, they say, and West Virginia’s new plan to put votes on a blockchain doesn’t fix that.

Voting in West Virginia just got a lot more high-tech—and experts focused on election security aren’t happy about it.

This fall, the state will become the first in the US to allow some voters to submit their federal general election ballots using a smartphone app, part of a pilot project primarily involving members of the military serving overseas. The decision seems to fly in the face of years of dire warnings about the risks of online voting issued by cybersecurity researchers and advocacy groups focused on election integrity. But even more surprising is how West Virginia officials say they plan to address those risks: by using a blockchain.

The project has drawn harsh criticism from election security experts, who argue that as designed, the system does little to fix the problems inherent in online voting.

We first heard of the West Virginia pilot in May, when the state tested a mobile app, developed by a startup called Voatz, during primary elections. The test was limited to overseas voters registered in two counties. Now, citing third-party audits of those results, officials plan to offer the option to overseas voters from the whole state. Their argument is that a more convenient and secure way to vote online will increase turnout—and that a blockchain, which can be used to create records that are extremely difficult to tamper with, can protect the process against meddling.

But that premise has been controversial from the start. After two fellows from the Brookings Institution penned an essay praising West Virginia for pioneering the use of blockchain technology in an election, Matt Blaze, a cryptography and security researcher at the University of Pennsylvania, pushed back hard. It’s not that blockchains are bad, said Blaze, who testified (PDF) before Congress last year on election cybersecurity. It’s that they introduce new security vulnerabilities, and securing the vote tally against fraud “is more easily, simply, and securely done with other approaches,” he said.

Blaze and many other election cybersecurity experts oppose online voting of any kind because, they feel, it’s fundamentally insecure. Although a number of countries have embraced the practice, in 2015 a team of cryptographers, computer scientists, and political scientists looked closely (PDF) at the prospect of internet voting in the US and concluded that it was not yet technically feasible. Protecting connected devices against hacking is hard enough, and, even if that could be achieved, developing an online system that preserves all the attributes we expect from democratic elections would be incredibly difficult to pull off.

The Voatz system uses biometric authentication to identify individual users before allowing them to mark an electronic ballot, and the votes are then recorded in a private blockchain. The company says that in a general election pilot, its system will use eight “verified validating nodes,” or computers (all controlled by the company) that algorithmically check that the data is valid before adding it to the chain.
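
To make the “verified validating nodes” idea concrete, here is a minimal Python sketch of an append-only hash chain with validator checks – an illustration under stated assumptions, not Voatz’s proprietary design. Each block commits to the previous block’s hash, so altering an earlier record breaks every later link, and a ballot is only appended once every validator approves it. Note that, as the critics quoted below point out, none of this protects a ballot on its way to the chain.

```python
import hashlib, json, time

def record_vote(chain, ballot, validators):
    """Append a ballot to a toy hash-linked chain if all validators accept it."""
    # Each validating node must independently approve the ballot first.
    if not all(validate(ballot) for validate in validators):
        raise ValueError("ballot rejected by validating nodes")
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"ballot": ballot, "prev": prev_hash, "ts": time.time()}
    # The hash covers the ballot, timestamp and previous hash, so any
    # tampering with an earlier block invalidates everything after it.
    payload = json.dumps(
        {k: block[k] for k in ("ballot", "prev", "ts")}, sort_keys=True)
    block["hash"] = hashlib.sha256(payload.encode()).hexdigest()
    chain.append(block)
    return block

# Usage with one toy validator (hypothetical contest name):
chain = []
validators = [lambda b: b.get("contest") == "US Senate"]
record_vote(chain, {"contest": "US Senate", "choice": "Candidate A"}, validators)
print(chain[0]["hash"])
```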

The system isn’t so much a blockchain-based app as it is a mobile app with a blockchain attached, says Marian K. Schneider, president of Verified Voting. The blockchain can’t protect the information as it travels over the internet, and doesn’t guarantee that a user’s choices will be reflected accurately. “I think they’ve made a lot of claims that really don’t justify any increased confidence in what they are doing versus any other internet voting system,” Schneider says.

Source: Technology Review

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

AI for Your Bicycle

The startup SMINNO is dedicated to developing solutions for everyday mobility that are not just innovative and unconventional but sustainable, too. All eyes were on its AI-supported cockpit system for bicycles at CEBIT 2018.

It's no wonder that startups featured heavily at CEBIT 2018, as their fresh ideas helped drive the event’s new, happening vibe. SMINNO GmbH provided the perfect example of the kind of innovative and unconventional solutions that stand to flourish under the growing focus on the environment, technology, urban living and convenience. This fledgling company is dedicated to developing smart mobility-related products that break down existing barriers to offer cutting-edge functions and convenience on the go. With sustainability firmly in mind, its "developed and made in Germany" solutions require no external energy input. What's more, SMINNO designs its devices to withstand the test of time and be affordable to all. At CEBIT 2018, which has just drawn to a close, the company was focusing hard on its extremely innovative bicycle cockpit system.

This groundbreaking solution - the first of its kind - links the universal CESAcruise hands-free system with the CruiseUP cockpit app to enable riders of any kind of bicycle to communicate safely with their smartphone using voice control. It provides cyclists with a clear overview of all the information they need while keeping both hands on the handlebars. So there's no need to manually switch between navigation, information and entertainment apps, which should lower the risk of accidents considerably.

The developers of the CESAcruise hands-free system took great care to buffer the microphone against wind effects for trouble-free telephone calls, as SMINNO explains. The device's innovative shape and sound-optimized plastic further boost the smartphone’s acoustics without consuming any additional energy. SMINNO has worked hard to ensure that both voices and music are reproduced loudly and clearly enough to do away with the need for headphones in the future.

Source: Cebit

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

How to Handle Job Rejection

Getting rejected from a job opportunity you’re excited about can be detrimental to your confidence in your job search. The application and interview stages are usually quite a process and no matter where you are in that process, if you get rejected, it’s tough. Many candidates internalize this and it can majorly affect the motivation to jump back into your search. So, how do you overcome the rejection in a job search?

Reflect on the experience
When you experience a ‘no’ in your job search, take the opportunity to turn it into a learning experience. Is there anything you wish you had done differently during the hiring process? Did you learn anything about yourself – say, interview skills you need to work on, or job responsibilities you do or don’t want? Take what you learned throughout the process, apply it to your next interview experience and become an even stronger candidate. Did the hiring manager give you any feedback? Use that too!

No bridge burning
One of the most difficult things to do when you’ve been rejected by an employer is to move on from the experience and not let it get you down. It’s easy to be angry and bad-mouth the employer, but you want to make sure you’re smart and avoid burning any bridges for future employment opportunities. If the company took time to interview you and included you in their pool of candidates, they saw you as a viable option and could possibly consider you again if the position opens. You could also potentially reach back out to the connections you made for networking purposes.

Fuel the fire
Your natural instinct might be to give up and lose momentum – don’t do this! Use the rejection and let it motivate you even more! As mentioned before, treat it as a learning experience to become the best candidate you can be. Get back out there and don’t let one or even a few rejections get you down; your perfect opportunity could be on its way. Getting turned down by an opportunity you were excited about is tough no matter what, but if you handle it in the right way, it can end up benefiting you in the long run!

Source: Celarity

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

10 Ethical Issues Of Artificial Intelligence And Robotics

AI is one of the technologies that could revolutionize the world; some people call it the electricity of the twenty-first century. Researchers and professionals need to be aware of the ethical and social implications this technology poses. We are responsible for making robots and AI systems that help and empower humanity.

AI and robotics are going to shape our future. Below are 10 issues that professionals and researchers need to address in order to design intelligent systems that help humanity.

Misinformation and Fake News

The flow of misinformation, together with our natural tendency to believe what confirms our existing views rather than what the evidence shows (a phenomenon called confirmation bias), is a threat to an informed democracy. Russian hackers influencing the US elections, the Brexit campaign and the Catalonia crisis are examples of how social media can massively spread misinformation and fake news. Recent advances in computer vision make it possible to completely fake a video of President Obama. It is an open question how institutions are going to address this threat.

Job Displacement

The scientific revolution of the 17th century and the industrial revolution of the 18th and 19th centuries marked a complete change in society. For thousands of years before them, economic growth was practically negligible; during the 19th and 20th centuries, the pace of development was remarkable.

In the 19th century, a group in the UK called the Luddites protested against the automation of the textile industry by destroying machinery. Since then, a recurrent fear has been that automation and technological advance will produce mass unemployment. Even though that prediction has proven to be incorrect, it is a fact that there has been painful job displacement. PwC estimates that by 2030 around 30% of jobs will be automated. Under these circumstances, governments and companies should provide workers with the tools to adapt to these changes, by supporting education and relocating jobs.

Privacy

The importance of privacy is all over the news lately due to the Cambridge Analytica scandal, where 87 million Facebook profiles were stolen and used to influence the US election and the Brexit campaign. Privacy is a human right and should be protected against misuse.

Cybersecurity

Cybersecurity is one of the biggest concerns of governments and companies, especially banks. A robbery of $1 billion from banks in Russia, Europe and China was reported in 2015, and half a billion was stolen from the cryptocurrency exchange Coincheck. AI can help protect against these vulnerabilities, but it can also be used by hackers to find new, sophisticated ways of attacking institutions.

Mistakes of AI

Last month, a woman was hit and killed overnight by an Uber self-driving car while crossing the street in the US. Like any other technological system, AI systems can make mistakes. It is a common misconception that robots are infallible and infinitely precise. A common way for some professors in my old lab to greet their robotics PhD students was: what have you broken?

Military Robots

There is an ongoing debate about controlling the development of military robots and banning autonomous weapons. An open letter from 25,000 AI researchers and professionals calls for a ban on autonomous weapons operating without human supervision, to avoid an international military AI arms race.

Algorithmic Bias

We have to work hard to avoid bias and discrimination when developing AI algorithms. A specific example is face detection using Haar Cascades, which has a lower detection rate for dark-skinned people than for light-skinned people. This happens because the algorithm is designed to find a double-T pattern in a grayscale image of the person’s face, corresponding to the eyebrows, nose and mouth. This pattern is more difficult to find in a person with dark skin.
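
For readers who want to see the algorithm in question, here is a minimal sketch of running OpenCV’s stock frontal-face Haar cascade, which ships with the library; the input file name is a hypothetical placeholder. Auditing the bias described above would mean running exactly this over a test set balanced across skin tones and comparing detection rates.

```python
import cv2

# Load OpenCV's bundled frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("portrait.jpg")                # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)    # Haar features use grayscale

# Slide the cascade over the image at several scales; the low-contrast
# eyebrow/nose/mouth pattern is exactly where detection can fail.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("portrait_faces.jpg", img)
```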

Haar Cascades are not racist – how could an algorithm be? – but many people can feel insulted by the result. When programming these algorithms, we need to be mindful of their limitations, be transparent with users by explaining how the algorithm works, or use a technique that is equally effective for dark-skinned people.

Regulation

Existing laws have not been developed with AI in mind; however, that does not mean that AI-based products and services are unregulated. As suggested by Brad Smith, Chief Legal Officer at Microsoft, "Governments must balance support for innovation with the need to ensure consumer safety by holding the makers of AI systems responsible for harm caused by unreasonable practices". Policymakers, researchers and professionals should work together to make sure that AI and robotics provide a benefit to humanity.

Superintelligence

Some tech leaders have voiced concerns about the possible threats of AI. One example is Elon Musk, who claimed that AI is riskier than North Korea. These words drew strong criticism from the scientific community.

Superintelligence generally refers to a state in which an AI starts to recursively improve itself, reaching a point where it surpasses the most intelligent human by orders of magnitude. Some enthusiasts, like Ray Kurzweil, believe that we will reach that state by 2045. Others, like François Chollet, believe that it is impossible.

Robot Rights

Should robots have rights? If we think of a robot as an advanced washing machine, then no. However, if robots were able to have emotions or feelings, then the answer is not that clear. One of the pioneers of AI, Marvin Minsky, believed that there is no fundamental difference between humans and machines, and that artificial general intelligence is not possible without robots having self-conscious emotions.

One suggestion in the debate around robot rights is that robots should be granted the right to exist and perform their mission, but that this should be linked to a duty to serve humans. There is a lot of controversy around this area. Meanwhile, in 2017, the robot Sophia was granted citizenship of Saudi Arabia, and even Will Smith flirted with her.

Source: MiguelGFierro

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

The 10 Golden Rules of Working With a Recruiter

Working with a good recruiter is an excellent option for both passive and active job seekers. Recruiters can have access to jobs, market information, insights, tips and connections that many people do not. Here are the 10 golden rules (for candidates) on how to work with a recruiter:

1. Be honest. 

From start to finish always be honest with your recruiter on what you’re looking for, your salary needs, and other opportunities. A good recruiter will return the favor. Neither party will benefit from dishonesty during the job search.

2. Be responsive. 

If your recruiter calls you, call them back! If your recruiter contacts you, it’s usually for good reason. It may be a job offer, updates, or a new position that needs to be filled yesterday. Commit to being available and responsive throughout your time working with a recruiter.

3. Be courteous. 

Recruiters are working for their clients, but they also want to help you in your job search. Try to respect their time and communicate when you have updates or questions. They usually do not have a lot of time to give suggestions on your resume or where to look for positions. However, a good recruiter will coach you on their client’s process and be an advocate for you.

4. Be available. 

Recruiting is a very time-sensitive industry. A client could call and ask to interview you that day. While this isn’t always possible, if you’re serious about your job search, try to be accommodating and available to ease the process.

5. Be proactive. 

Just because you met with a recruiter doesn’t mean you can sit back and stop your job search. If you’re unemployed or need a new job ASAP, you should continue to work on your job hunt and don’t rely 100% on recruiters.

6. Stay in touch.

Perhaps you took a three-month contract job; near the end of it, let your recruiter know what you’re looking to do next so they can keep you in mind for future positions. Recruiters work with hundreds of candidates at a time and won’t always know when you’re available, so stay in touch.

7. Be ethical.  

If you have signed an agreement with a particular recruiter, make sure you understand the agreement and ask any questions you might have.

8. Be prepared.

Before you even start connecting with recruiters, have your best resume prepped, practice your interview skills and know what you’re looking for. The more prepared you are, the faster a recruiter can get you into the process for open positions.

9. Be decisive. 

Before an offer comes, be prepared to accept or decline it. One of the most detrimental things you can do to your prospects is to delay making a decision. The client may be put off by the wait, or another candidate may join the process and your offer may expire.

10. Be open. 

Your recruiter might share opportunities you hadn’t initially pictured yourself being interested in. Perhaps you were thinking of a large corporate, but an awesome opportunity fitting your background has opened at a small firm. Be open to new opportunities so that your recruiter doesn’t rule you out before you even get to hear about the role.

Source: Celarity

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Emotional AI Makes Your Car Really Know How You Feel

Imagine if your car knew how you felt and adjusted accordingly. Affectiva's Automotive AI is a system capable of recognizing the emotional states of drivers and passengers in real-time.

Imagine if your car could pull itself over when you're drowsy or nauseous, or adjust the temperature and music when gridlock is stressing you out. Maybe it could even refuse to start if it knows you're intoxicated.

With advanced driver assistance systems (ADAS) already in place and autonomous vehicles on the horizon, a lot of work is being done around sensing and machine learning to help vehicles better understand the roads and the world around them. But Boston-based startup Affectiva thinks more needs to be done around the internal world of the car—specifically the emotional state of the driver.

Affectiva has built its business model around creating “emotional AI,” algorithms capable of recognizing human emotional states. The company recently rolled out its first product, Affectiva Automotive AI—a system capable of real-time analysis of the emotional states of drivers and passengers via cameras and voice recorders mounted into the cabin.

Speaking with Design News, Abdelrahman Mahmoud, product manager at Affectiva, said that over the past year the company's technology has garnered a lot of interest from Tier 1 suppliers and OEMs—particularly in the automotive space. “[They] were surprised about what we could do to understand what's happening in the cabin, whether that was in-cabin activities or motions, and how can we use those metrics to actually have the systems in the car adapt to that, whether it was for entertainment or for safety,” Mahmoud said.

He explained that Affectiva only supplies the software end, leaving suppliers and automakers to customize the system as they see fit in terms of the hardware needed. “We did a lot of training our models to recognize emotion from different head positions and head views, so the OEM has control over the design and where to place the camera,” Mahmoud said. “We also worked a lot on making sure the platform can run robustly in real time on end devices. There's no CPU or GPU required. We've even had our models run on dual-core CPUs for mobile devices.” He noted that the company has added support for near infrared (NIR) cameras for use at night to make sure the driver and occupants can be monitored under all lighting conditions.

Affectiva's emotional AI can currently recognize seven emotional metrics (anger, contempt, disgust, fear, joy, sadness, and surprise) and as many as 20 facial expression metrics. Mahmoud said that automakers are particularly interested in measuring joy, anger, surprise, drowsiness, frustration, intoxication, and nausea—with a particular emphasis on drowsiness, distraction, and intoxication. Ultimately, it will be up to the OEMs to decide what metrics they want to measure and how the vehicle will respond. “We see different levels of control depending on things like the level of drowsiness,” Mahmoud explained. “You can first have auditory alerts, followed by visual alerts, then things that could suggest, like if the car has semi-autonomous capability, why not engage those capabilities when [the system] detects drowsiness.”

The challenge in this scenario becomes apparent: How can you standardize this across all drivers? Even something as seemingly simple as adjusting music according to mood can get very complex once human factors are taken into account. The same Led Zeppelin song that might make one driver happy and relaxed might send another driver's stress levels through the roof.

Mahmoud said the solution to this has been developing AI capable of building a long-term emotion profile. “We've developed a model that can do long-term emotional tracking, not just in the course of one interaction, but over time, to build an emotional profile and baseline as well as detecting anomalies and major events.”

In a talk at the recent 2018 GPU Technology Conference, Ashutosh Sanan, a computer vision scientist at Affectiva, explained the challenge around sensing emotion was in using temporal modeling—meaning the AI had to be able to discern emotions from sequences of images (i.e., video camera footage) rather than just a single image. “It's a tough problem because facial muscles can generate hundreds of expressions and emotions,” Sanan said. Such a process involves analyzing a lot of complex expressions and performing a lot of multi-attribute classifications. And it all needs to be done on a system fast enough to run on embedded systems and mobile devices.

To overcome this, Sanan said the team at Affectiva used a combination of a convolutional neural network (CNN) and long short-term memory (LSTM). CNNs are typically used in image recognition to teach AI to recognize specific objects or properties of images. An LSTM is a type of recurrent neural network that allows an AI to learn to recognize patterns in large data sets. Combine the two and you have a model that is capable of recognizing patterns in sequences of images and storing that information.
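
Affectiva has not published its exact architecture, but the CNN-plus-LSTM combination Sanan describes can be sketched in a few lines of Keras: a small convolutional network extracts features from each frame, and an LSTM integrates those features across the clip. The clip length, image size and layer widths below are illustrative assumptions, not Affectiva's real model; only the seven-class output mirrors the emotions listed above.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Input: a clip of 16 face crops, each 64x64 grayscale (illustrative sizes).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(16, 64, 64, 1)),
    # The same small CNN is applied to every frame independently.
    layers.TimeDistributed(layers.Conv2D(16, 3, activation="relu")),
    layers.TimeDistributed(layers.MaxPooling2D()),
    layers.TimeDistributed(layers.Conv2D(32, 3, activation="relu")),
    layers.TimeDistributed(layers.GlobalAveragePooling2D()),
    # The LSTM tracks how the per-frame features evolve over time.
    layers.LSTM(64),
    # One output per emotion: anger, contempt, disgust, fear, joy, sadness, surprise.
    layers.Dense(7, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()
```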

“Your emotional state is a continuously evolving process,” Sanan said. “Leveraging [temporal information] makes our predictions more robust and accurate. Adding temporal information makes it easier to detect highly subtle changes in facial state.”

There has been no official word on a release of the first vehicles to implement Affectiva's technology. But should the idea of emotional AI for autos catch on, we may even see autonomous taxis and fleet vehicles adjusting their behavior based on their passengers' personalities. While Affectiva did not specifically set out to become an auto-centric company, Mahmoud said that automotive is becoming the company's core focus. The next steps, he said, are to become more engaged in the productization of its technology for cars. “On the research side, we're also doing more around in-cabin sensing for more nuanced emotional states. Nausea and stress are active areas of research for us.” According to the company, the database used to train its emotional AI currently consists of over six million faces representing 87 countries.

Source: Designnews

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

The Latest In Disruptive Innovations With Emerging Technologies

Think about it. What type of business do you want to be? Innovative? Check. Using best practices? Check. Closely following market leaders? Check. But, what if you changed the order around and led the way instead? Check.

Purpose is an important driver in fostering an innovative culture within companies: innovation is value, driven by purpose. The era of driving innovation by following others is over; the lead is shifting to businesses that take charge, collaborate with the ecosystem, and drive innovation with new purpose. While any innovation project can swing either way, the benefits to be reaped are manifold, especially with emerging, disruptive technologies.

According to the IDC white paper “The Future Services Sector: Continuous Delivery for Competitive Advantage,” companies tend to look at new business models that deliver 15%–25% faster revenue growth than the industry average. To sustain such success, companies are rushing to adjust to the new service-driven business world – a world that can be enabled now with new technologies.

We believe the new service-driven business models work best when you connect things with people and processes. This means you need to make things intelligent. Sales can drive revenue by connecting, monitoring, and distributing products with embedded intelligence in the field to ensure never-empty situations. For the manufacturer, the fixed assets across the network have to be tracked, monitored, analyzed, and maintained. Businesses have to offer people new services in real time, highly tailored to the individual’s activity, location, and needs, to anticipate and solve issues before they even happen.

A business can do one of three things:

  1. Keep up with innovation trends
  2. Become a late adopter of innovation
  3. Emerge as a trend-setter by leveraging the power of co-innovating with trusted partners

In our experience over the last decade, we see more and more companies adopting the third approach and actively collaborating with the ecosystem to create innovations for their business.

So where do we start? Well, it all starts with changing your thinking.

No technology for technology’s sake

One of the mistakes we often make with technology adoption is to chase everything that is latest. A better approach is to partner with experts across company boundaries and let them help you find the sweet spot of innovation, identifying the right technologies and applications to address your business challenges. Another approach is to break new ground for your business by focusing on co-innovation through strategic partnerships with trusted technology experts.

A good follow-up question would be: “How can we begin to view things differently?” Let’s look at two up-and-coming technologies and how we can use them to our advantage.

IoT and security: Focus on security risk assessment

The connected world is fast becoming a reality. This essentially means Internet of Things (IoT) in everything and everywhere. Gartner predicts that by 2020, IoT technology will be in 95% of electronics for new product designs. This leads to smart devices and sensors everywhere creating an unprecedented amount of data. Key to the emerging connected world is the ability to gain new insights and make better decisions through advanced data analysis.

But in the great race to be leaders in the IoT realm, don’t leave behind the need to carefully evaluate and secure every component of the IoT ecosystem. The trick is to bring the IoT ecosystem together in co-innovation mode to build smart IoT solutions that are inherently secure from the start.

The “B” journey: Bitcoin to blockchain

Beyond cryptocurrency, in blockchain we have a new technology that offers a real possibility for citizens and governments to move forward by revolutionizing service delivery with a built-in trust system. When using blockchain for public services, we are leveraging the basic and underlying principles of trust and transparency.

Quite rightly, the Dubai Government has put in place a plan for all government departments to be blockchain-enabled. We foresee that this could go much further than just enabling system trust; with better tracking and traceability, it could also help reduce crime and pilferage and even secure imported and exported products across every industry sector.

Co-innovation is key for emerging technologies: An example from UAE

What started as a management philosophy is today practiced by global leaders to bring alive the power of the ecosystem. A good case in point for co-innovation is SAP’s recent showcase with a leading real-estate property management company. The pilot set out to show how a lease process involving third parties such as contractors could be managed more efficiently with blockchain, so that the period between two leases is reduced. With the blockchain pilot, work can be seamlessly monitored at every stage of the contractor’s process, bringing better transparency to the completion status of projects.

Think of net-new innovation

Over the last decade, co-innovation labs have been involved in fueling and running projects across multiple markets, including the recently launched set of co-innovation labs in the Middle East. The risks are lower and success rates are higher in a co-innovation approach. A large part of our work has been focused on bringing together businesses, partners, and the latest technologies with a structured and guided global approach to deliver focused innovation specific to the region and create differentiated value.

Source: Digitalistmag

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

4 Things To Do After You Ace A Job Interview

So you think you nailed it, do you?  You scored a hole in one, a home run, a grand slam of an interview?  Your conversations were productive, the interviewers excited and the hiring manager mentioned that you should watch for good news, all but ensuring the gig is yours.  In short, you aced that job interview. Congratulations…but…now what?

Everyone dreams of having that perfect job application and interview experience that ends in a lucrative offer in a career of your dreams.  The reality is, however, that even a great interview can be undone by the actions candidates take after the fact. Success doesn’t end the moment you walk out the door after a successful sit down.  With that in mind, here are four things to do after you ace a job interview to ensure you capitalize on your win and end up hearing that ever so magical “you’re hired”!

Send Out Those Thank You’s

Just because the job is all but yours doesn’t mean you have a free pass to skip the typical applicant niceties. Thank-you emails or letters to each and every person you met with during your interview rounds are an essential and expected part of any hiring process. This isn’t just an exercise in etiquette, either. Sending thank-you emails shows attention to detail and demonstrates not only that you can follow up but also that you are professional and have sound communication skills. If it’s a close call between you and another candidate, or if there is even one person on the interview team who had concerns, the lack of a thank-you letter could tip the scales in a less than favorable direction.

Dive Into Research

So you’re confident that you’ve got the job, right? Well, why not give yourself a leg up and start researching your new position? Bust out that job description and delve into the individual tasks you’ll be asked to perform. Check out the company’s website and locate the team or individuals you’ll be working with to gain much-needed insight. As a bonus, researching other similar positions will prepare you for any needed salary negotiations. Just because you’ll be getting your foot in the door doesn’t mean you’re guaranteed to keep it there. Do your research and make sure your second and third impressions are just as good as the first.

Don’t Forget to Follow Up

However well you think you did during the interview process, there is always the chance that you read the room incorrectly. Maybe one or two team members had misgivings about whether you’d make a good corporate culture fit. Maybe you were the first applicant, and your skills and resume are fading from memory while other candidates go through the interview process. No matter how much you think you aced that interview, follow-up is critical to ensuring that success translates into a job offer. Much like sending thank-you emails, a follow-up email after a week or so will demonstrate your continued enthusiasm for the position as well as keep your name fresh in the mind of the hiring manager or recruiting contact. Keep your follow-up polite and offer to send any additional information that may be needed. Avoid pestering follow-up emails timed too close to interview day or sent repeatedly, and you’ll look like a candidate who is ready, willing and able to get up and running in the new position.

Keep Sending Out Applications

File this last piece of advice in the “don’t count your chickens before they’re hatched” category.  Regardless of how well your interview went, many things could stand in the way of you landing the job in the end.  Job requirements and staffing needs frequently shift, especially in larger organizations. The position that was available could have been eliminated or filled internally even if you are the perfect candidate.  

To avoid putting all of your eggs in one basket, continue to apply for other available positions while you’re waiting to hear back and following up.  It certainly can’t hurt your chances if you keep looking and you never know when a bigger, better opportunity may be just around the corner. Until you’ve signed the offer letter, keep your irons in the fire to help ensure job-search success.  

Source: Simplyhired

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Serverless has Quickly Become Mission Critical for Mainstream Enterprises

Serverless is barely out of diapers, yet mainstream enterprises are already looking to it to help them leapfrog containers.

Serverless is being driven by tech laggards. But whether driven by newbies hoping to get the benefits of containers without figuring out the realities of running them, or by tech hipsters, serverless functions are getting real remarkably fast. As recent survey data from Serverless Inc. suggests, despite the relative novelty of serverless for most enterprises, it's already becoming critical infrastructure.

Driven by laggards

Of course, the data does come from Serverless Inc., which admits that "the majority of people who answered the survey were probably Serverless Framework users." So the survey didn't necessarily reach the hinterlands of enterprise computing.

Or possibly it did. As the Stackery team told RedMonk analyst James Governor, "serverless is being driven by mainstream enterprises. We see them leapfrogging containers so they can take something off the shelf and move quickly."

Containers are great, but as Governor pointed out, "The problem with container infrastructures is they call for highly skilled developers and operators," which many companies don't have and can't get, as the best developers prefer to work for "cooler" companies.

And so, serverless is driven by tech laggards who may well inhabit the "hinterlands of enterprise computing." Where are these companies using serverless? Well...everywhere.

Mission critical at such a young age

According to the survey, some 53.2% of respondents said serverless is critical for their job. That's a massive number given how new serverless is. Again, it's a skewed sample, perhaps, but how about this: 24% of those surveyed had virtually no public cloud experience before adopting serverless.

Wut?

Yes, that leaves 76% of those surveyed with lots of experience running applications in the public cloud, but to find nearly a quarter with none? Running cutting-edge serverless functions? That's huge, especially when you consider that 65% of those public cloud newbies say that serverless is critical to the work they do. It's also a testament to serverless being a great opportunity for companies looking to leapfrog the mess of containers.

As for what holds these companies back (experienced cloud consumers or otherwise), it's mostly a matter of operationalizing serverless. As much as serverless does to lower the barrier to developer productivity, developers still want better tooling and more knowledge before they feel completely capable of moving forward with confidence. That's a great opportunity for vendors in the space, particularly since serverless is changing the whole definition of an "early adopter." In serverless, the early adopter is...everyone.

Source: Techrepublic

If you’re interested in a career in Cloud Technology call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Healthcare and the Robot Revolution

Although the field of medicine is a thoroughly scientific venture, it’s a field that has historically been slow to adopt new technology, as the high stakes involved mean small mistakes can have disastrous results.

However, the benefits of certain technologies mean there is great interest in bringing technological advances to the field, and robotics has begun playing a major role in medicine in recent years. Here are some of the places where you’re likely to see robotics in medicine in the near future.

Surgery

Performing surgery requires precision, and capable surgeons are able to perform remarkable feats. However, human physiology comes with limitations. Perhaps the most famous robot being used in surgery today is da Vinci. High-definition video provides doctors with levels of resolution impossible using traditional surgery techniques, giving surgeons an enhanced ability to see the structures they’re operating on. The arms on da Vinci can perform tasks the human hand and wrist cannot, enabling new types of surgical maneuvers. Da Vinci can also make smaller incisions compared to human surgeons, minimizing infection risk and reducing the appearance of scars. It may still be some time before robots can perform surgery without human intervention, but efforts are underway.

Robotic Assistants

A significant portion of healthcare is ensuring patients receive the attention they need, and having humans check in on patients, especially the elderly, is a significant cost. Robotic assistants are beginning to fill this gap. Already, robots can provide a degree of companionship many elderly people need on a daily basis, helping to reduce the feelings of loneliness that can have a profound impact on health. Furthermore, robotic assistants can measure indicators of health and alert medical professionals if attention is needed. Although robots will never replace visits from family or check-ins from medical professionals, they can provide the type of day-to-day care that’s increasingly needed for aging populations.

Servicing Clinics and Hospitals

It takes a significant amount of human labor to keep hospitals and clinics up and running, and these labor costs contribute to rising healthcare expenses. Robots used in other industries are coming to healthcare centers. Instead of paying people to disinfect rooms on a regular basis, robots can perform the task. People staying in hospitals need deliveries of food and other items, and robots can handle these as well. By investing in robots instead of hiring new employees, health organizations can reduce their operating costs and pass the savings to patients.

Exoskeletons

Mobility is a major element of healthcare, and helping people enjoy their lives is a major goal of those in the medical field. Wheelchairs have advanced significantly over the years, and modern chairs are more convenient and safer to operate, partially due to robot-like components. Medical exoskeletons, however, have become a focus, with several companies vying to bring products to market. Exoskeletons allow people to handle environments that aren’t accessible to wheelchair users, and they enable users to blend into society more seamlessly. For some patients, being able to walk is a significant goal, and exoskeletons have the potential to let them achieve this dream.

Improved Manufacturing

Devices manufactured for medical use cost several times more than their non-medical counterparts. A simple scalpel, for example, might cost five times as much if it’s approved for medical use. Part of this is due to the increased precision mandated by medical use, and robots will help meet these tight tolerances. The approval process for medical devices is expensive as well, and testing is a significant expense. Robots designed to simulate human movement and physiology can help reduce these expenses, making it easier to try out and approve new products. This type of innovation can ensure doctors and patients receive better products, and it can lead to more rapid innovation.

Telemedicine

In-office visits will likely never be supplanted by robotics, but telemedicine has the potential to connect medical experts and patients remotely. Better camera technology is having an impact, and remotely controlled arms and other devices can allow doctors and nurses to perform certain tasks across a network. Telemedicine will allow patients to connect with doctors with extremely narrow specialties, allowing doctors to better treat specific conditions. Telemedicine also cuts back on transportation costs, which add up quickly when patients need to make regular trips to medical facilities.

Robotics haven’t been as fast to enter fields as some futurists projected, but they are having an impact in the medical field. We likely won’t be replacing our regular doctors with robots any time soon, but advances in robotic technology, combined with improved artificial intelligence, means that change is coming to the medical field in a big way.

Source: Technative

If you’re interested in a career in Robotics call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

AI can peer pressure you, too

In a study out Wednesday in Science Robotics, researchers wanted to see whether artificially intelligent robots might peer pressure people into complying with the views of an erroneous majority. Adults were able to maintain their opinions, but children were not so tough minded. In the company of the robots, the children, aged between seven and nine, tended to mimic the robots’ answers.

“We've known for a long time that it is hard to resist taking over views and opinions of people around us,” Tony Belpaeme, a professor of robotics at Plymouth University and one of the study’s authors, said in a statement. And while adults might be able to resist those opinions when they come from a non-human, for children, it is not so simple.

“Children can perhaps have more of an affinity with robots than adults,” Belpaeme said. “Which does pose the question: what if robots were to suggest, for example, what products to buy or what to think?”

In a test known as the Asch Paradigm, first developed in the 1950s, a card shows four lines: two are the same length, and two are different. Usually, when asked to select the two matching lines, people perform well on this test. But in the company of others who disagree with them, they flail — often picking lines that do not match one another. (In these experiments, only a quarter of the subjects were able to remain independent, choosing the correct answer despite the views of others.)

With artificial intelligence being used increasingly in the home, workplace and school for both entertainment and therapeutic purposes, the researchers wanted to understand how this kind of peer pressure might translate — especially given evidence that humans sometimes interact with and treat robots like other human beings. They hoped the study would spur further discussion of whether protective measures and regulations should be in place.

Source: TheOutline

How AI Can Be Applied To Cyberattacks

Nowadays, artificial intelligence is a kind of de facto standard. One would be hard-pressed to find an industry where AI or machine learning has found no applications. AI projects are popping up everywhere -- from law to medicine, farming to the space industry.

Cybersecurity is not an exception. As early as 2013, pioneer companies such as Cylance, Darktrace and Wallarm have released AI-based cybersecurity products. Since then, the number of security startups using some sort of machine learning has grown year after year. These are cyber threat defenders armed with AI, but what about AI-powered attackers?

It would be foolish to assume that attackers and intruders would forgo such an effective tool as AI to make their exploits better and their attacks more intelligent. That’s especially true now that it’s so easy to use machine learning technologies out of the box, leveraging open-source frameworks like TensorFlow, Torch or Caffe. Not being an attacker, I can still speculate about what these AI-generated exploits might look like, when we can expect them to materialize and how we can protect ourselves from these threats.

We got our first glimpse of what AI-powered attacks would look like from DARPA’s Cyber Grand Challenge -- the world’s first all-machine cyber hacking tournament, held two years ago in 2016. That contest proved that it was possible to fully automate practical cybersecurity tasks like exploit generation, attack launch and patch generation. We can pinpoint this event as the beginning of the era of fully automated cybersecurity.

To understand how machine learning applies to cyberattacks, we need to understand the attack process a little better by formalizing it. I'll attempt to explain what happens from a technical perspective when we hear about a data breach. All successful attacks that lead to data breaches can be divided into several stages that attackers must pass through to make the breach happen:

  1. Vulnerability discovery
  2. Exploitation
  3. Post-exploitation (discovery and exploitation of other vulnerabilities inside)
  4. Data theft

This is my own way to simplify the famous kill chain model. Let’s look at what happens at each stage to understand how the AI can be applied there.

Vulnerability Discovery

An attacker must find some issues inside the system in order to break it. Primarily, there are two ways to discover vulnerabilities: 1) check for known issues using known payloads, and 2) generate new payloads by fuzzing to discover new issues. The first approach is as simple as following a checklist: the vulnerability tool checks all the items one by one. The second is more interesting. The attack tool tries to provoke unusual behavior, for example by putting unexpected data in request fields to cause an abnormal response from the target service. This is where neural networks really shine. Artificial intelligence, trained on already-discovered payloads for existing vulnerabilities, can suggest new payloads that discover new issues with higher probability.
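
To make the fuzzing half of this concrete, here is a minimal mutation-fuzzing sketch in Python, of the kind defenders run against their own parsers to find such issues first; the learned approach described above would replace the random mutate() with a model trained on known payloads. The toy_parser target and the function names are illustrative, not any particular tool's API.

```python
import random

def mutate(seed: bytes) -> bytes:
    """Randomly flip, insert or delete bytes in a known-good input."""
    data = bytearray(seed)
    for _ in range(random.randint(1, 8)):
        op = random.choice(("flip", "insert", "delete"))
        if op == "flip" and data:
            data[random.randrange(len(data))] ^= 1 << random.randrange(8)
        elif op == "insert":
            data.insert(random.randrange(len(data) + 1), random.randrange(256))
        elif op == "delete" and data:
            del data[random.randrange(len(data))]
    return bytes(data)

def fuzz(target, seed: bytes, trials: int = 10_000):
    """Report inputs that crash a parser you own (defensive testing)."""
    for _ in range(trials):
        candidate = mutate(seed)
        try:
            target(candidate)
        except Exception as exc:   # a crash marks a potential vulnerability
            print(f"crashing input {candidate!r}: {exc}")

# Usage against a deliberately fragile toy parser of "key=value" records:
def toy_parser(data: bytes):
    key, value = data.split(b"=")   # raises on malformed input
    return {key.decode(): value.decode()}

fuzz(toy_parser, b"user=alice")
```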

This vulnerability discovery phase, in fact, looks pretty similar to picking a lock. At this phase, a thief would need to find the right pick from a set of different lockpicks. As I showed earlier, AI tools already can generate new types and variants of these lockpicks automatically.

Exploitation

At the exploitation phase, attackers apply all their knowledge and experience to gain access or cause another adverse impact by using a previously discovered vulnerability. This process can be automated for well-known issues by simply coding each exploit step by step. But what if the vulnerability has been discovered for the first time? In that case, an attacker -- whether human or machine -- must find the right way to build an exploit that penetrates a particular system/application/infrastructure/environment configured in a particular way. AI can help at this phase by adapting an exploit to the particular environment faster than a human, simply because it can generate exploit variants and run them much faster.

In our lockpicking analogy, this phase is opening the door: a thief applies the proper lockpick the right way to open the door and get inside.

Post-Exploitation

This process is often recursive. After exploiting the first issue and gaining some access, an attacker goes deeper, discovering new issues and, in turn, exploiting them. This happens because any reasonably designed infrastructure is organized into separate, isolated layers. By compromising one layer, an attacker becomes able to repeat the same discovery -> exploitation -> post-exploitation -> data theft phases on the next layer, which was not accessible before.

This is the same as a thief in the real world who will find some new locks on safes after they get through the front door.

Data Theft

The paydirt for attackers is the data-stealing phase of the attack: finding and downloading valuable data like user emails and passwords, credit cards, SSNs, etc. Sometimes it's not so easy to steal a lot of data, because of its sheer volume and the outbound filters installed inside the victim's infrastructure. At the same time, data search and classification are important at this stage, and AI has historically been good at search.

Thieves would find the most valuable things and steal them first -- AI can also help them decide what to steal faster.

Summary

AI-driven tools are not only able to find new ways to discover vulnerabilities; they can also identify which data is most valuable in a breach. And sooner rather than later they will be able to generate new ways to exploit these issues, unlike the present-day situation, where they merely speed up step-by-step attack scenarios defined by humans.

Source: Forbes

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Does AI work like a brain?

In 1739, Parisians flocked to see an exhibition of automata by the French inventor Jacques de Vaucanson performing feats assumed impossible by machines. In addition to human-like flute and drum players, the collection contained a golden duck, standing on a pedestal, quacking and defecating. It was, in fact, a digesting duck. When offered pellets by the exhibitor, it would pick them out of his hand and consume them with a gulp. Later, it would excrete a gritty green waste from its back end, to the amazement of audience members. 

Vaucanson died in 1782 with his reputation as a trailblazer in artificial digestion intact. Sixty years later, the French magician Jean-Eugène Robert-Houdin gained possession of the famous duck and set about repairing it. Taking it apart, however, he realised that the duck had no digestive tract. Rather than breaking down the food, the pellets the duck was fed went into one container, and pre-loaded green-dyed breadcrumbs came out of another. 

The field of artificial intelligence (AI) is currently exploding, with computers able to perform at near- or above-human level on tasks as diverse as video games, language translation, trivia and facial identification. Like the French exhibit-goers, any observer would be correctly impressed by these results. What might be less clear, however, is how these results are being achieved. Does modern AI reach these feats by functioning the way that biological brains do, and how can we know? 

In the realm of replication, definitions are important. An intuitive response to hearing about Vaucanson’s cheat is not to say that the duck is doing digestion differently but rather that it’s not doing digestion at all. But a similar trend appears in AI. Checkers? Chess? Go? All were considered formidable tests of intelligence until they were solved by increasingly more complex algorithms. Learning how a magic trick works makes it no longer magic, and discovering how a test of intelligence can be solved makes it no longer a test of intelligence. 

So let’s look to a well-defined task: identifying objects in an image. Our ability to recognise, for example, a school bus, feels simple and immediate. But given the infinite combinations of individual school buses, lighting conditions and angles from which they can be viewed, turning the information that enters our retina into an object label is an incredibly complex task – one out of reach for computers for decades. In recent years, however, computers have come to identify certain objects with up to 95 per cent accuracy, higher than the average individual human.

Like many areas of modern AI, the success of computer vision can be attributed to artificial neural networks. As their name suggests, these algorithms are inspired by how the brain works. They use as their base unit a simple formula meant to replicate what a neuron does. This formula takes in a set of numbers as inputs, multiplies them by another set of numbers (the ‘weights’, which determine how much influence a given input has) and sums them all up. That sum determines how active the artificial neuron is, in the same way that a real neuron’s activity is determined by the activity of other neurons that connect to it. Modern artificial neural networks gain abilities by connecting such units together and learning the right weight for each.
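
That base unit is simple enough to write out in full. Below is a minimal Python sketch of one artificial neuron; the sigmoid squashing function and the example numbers are standard illustrative choices, not something specified in the article:

import math

def neuron(inputs, weights, bias=0.0):
    """Multiply each input by its weight, sum the results, then squash
    the sum into an activity level between 0 and 1."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Three inputs; each weight sets how much influence that input has.
print(neuron([0.5, 0.9, -0.2], weights=[0.8, -0.1, 0.4]))

Learning, in this picture, is nothing more than adjusting those weights until the network's outputs are useful.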

The networks used for visual object recognition were inspired by the mammalian visual system, a structure whose basic components were discovered in cats nearly 60 years ago. The first important component of the brain’s visual system is its spatial map: neurons are active only when something is in their preferred spatial location, and different neurons have different preferred locations. Different neurons also tend to respond to different types of objects. In brain areas closer to the retina, neurons respond to simple dots and lines. As the signal gets processed through more and more brain areas, neurons start to prefer more complex objects such as clocks, houses and faces. 

The first of these properties – the spatial map – is replicated in artificial networks by constraining the inputs that an artificial neuron can get. For example, a neuron in the first layer of a network might receive input only from the top left corner of an image. A neuron in the second layer gets input only from those top-left-corner neurons in the first layer, and so on. 

The second property – representing increasingly complex objects – comes from stacking layers in a ‘deep’ network. Neurons in the first layer respond to simple patterns, while those in the second layer – getting input from those in the first – respond to more complex patterns, and so on. 
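
A rough Python sketch of both properties, assuming nothing beyond NumPy: each output unit is wired to only a small patch of its input (the spatial map), and stacking a second layer on the first yields responses to more complex patterns:

import numpy as np

def layer(image, kernel):
    """Each output unit sees only a small patch of its input;
    one shared kernel slides across the whole image."""
    kh, kw = kernel.shape
    h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0)  # simple nonlinearity between layers

image = np.random.rand(8, 8)
layer1 = layer(image, np.array([[1.0, -1.0]]))     # responds to simple edges
layer2 = layer(layer1, np.array([[1.0], [-1.0]]))  # builds on layer-1 output
print(layer2.shape)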

These networks clearly aren’t cheating in the way that the digesting duck was. But does all this biological inspiration mean that they work like the brain? One way to approach this question is to look more closely at their performance. To this end, scientists are studying ‘adversarial examples’ – real images that programmers alter so that the machine makes a mistake. Very small tweaks to images can be catastrophic: changing a few pixels on an image of a teapot, for example, can make the network label it an ostrich. It’s a mistake a human would never make, and it suggests that something about these networks is functioning differently from the human brain. 
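
The teapot-to-ostrich failure is easy to reproduce in miniature. The sketch below uses a toy linear classifier rather than a deep network, and the two labels are stand-ins, but it shows the principle adversarial examples exploit: nudge every pixel by a tiny amount in the most damaging direction, and the label flips.

import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)   # weights of a toy linear classifier
x = rng.normal(size=100)   # a toy 'image' flattened into a vector

def label(image):
    return "teapot" if image @ w > 0 else "ostrich"

# Move each pixel a tiny step against the sign of its weight:
# just enough to push the score across the decision boundary.
margin = x @ w
step = 1.01 * abs(margin) / np.sum(np.abs(w))
x_adv = x - np.sign(margin) * step * np.sign(w)

print(label(x), "->", label(x_adv), f"(per-pixel change: {step:.4f})")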

 

Studying networks this way, however, is akin to the early days of psychology. Measuring only environment and behaviour – in other words, input and output – is limited without direct measurements of the brain connecting them. But neural-network algorithms are frequently criticised (especially among watchdog groups concerned about their widespread use in the real world) for being impenetrable black boxes. To overcome the limitations of this techno-behaviourism, we need a way to understand these networks and compare them with the brain. 

An ever-growing population of scientists is tackling this problem. In one approach, researchers presented the same images to a monkey and to an artificial network. They found that the activity of the real neurons could be predicted by the activity of the artificial ones, with deeper layers in the network more similar to later areas of the visual system. But, while these predictions are better than those made by other models, they are still not 100 per cent accurate. This is leading researchers to explore what other biological details can be added to the models to make them more similar to the brain. 

There are limits to this approach. At a recent conference for neuroscientists and AI researchers, Yann LeCun – director of AI research at Facebook, and professor of computer science at New York University – warned the audience not to become ‘hypnotised by the details of the brain’, implying that not all of what the brain does necessarily needs to be replicated for intelligent behaviour to be achieved. 

But the question of what counts as a mere detail, like the question of what is needed for true digestion, is an open one. By training artificial networks to be more ‘biological’, for example, researchers have found computational purpose in the physical anatomy of neurons. Some correspondence between AI and the brain is necessary for these biological insights to be of value. Otherwise, the study of neurons would be only as useful for AI as wing-flapping is for modern airplanes. 

In 2000, the Belgian conceptual artist Wim Delvoye unveiled Cloaca at a museum in Belgium. After eight years of research on human digestion, he had created a device – consisting of a blender, tubes, pumps and various jars of acids and enzymes – that successfully turns food into faeces. The machine is a true feat of engineering, a testament to our powers of replicating the natural world. Its purpose might be more questionable. One art critic was left with the thought: ‘There is enough dung as it is. Why make more?’ Intelligence doesn’t face this problem.

Source: Aeon

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

How to avoid the top six most common job interview mistakes

Don’t bring any angst to the interview, dress formally and give clear, concise answers

1. Negativity

Whether your last boss was a bullying dictator or you’re full of post-university angst, do not bring any negativity to the interview. When faced with the challenging prospect of discussing previous employment, graduates should be ready to put a positive spin on even the most reasonable of complaints. While we’re on the subject, keep your integrity intact and never lie. Being able to trust your employees is pivotal, so being caught out in an interview can mean an instant rejection.

2. Inappropriate interview attire

While it can depend on the job sector, the general rule of thumb is formal clothing. First impressions are vital and demonstrate how seriously you are taking the opportunity. If a graduate turns up in jeans and flip flops, they shouldn’t expect a warm welcome. The best advice is to always take a conservative approach and be well groomed - polished shoes and irons at the ready. You need to fit into a commercial, professional environment which often means you need to be willing to sacrifice youthful fashion for the job.

3. Talking too much or too little

Your answers should be like concise mini-essays with a clear beginning, middle and end. Too short and it looks like you have little to say, too lengthy and you’ve probably babbled and missed the point. Be composed, think before you answer and employ structure.

4. Not enough research

This can either be a lack of research into the company and role, or not enough preparation for tricky interview questions. Although nerves come with the territory, if a graduate is both anxious and underprepared, they won’t come across well. You therefore need to go the extra mile when carrying out any research. Candidates should memorise a few key background facts, find out more about who will be interviewing them, such as finding them on LinkedIn or Twitter, and familiarise themselves with the company’s market and wider online presence – not just their own website.

5. Lack of questions

An interview isn’t just about how a graduate’s past experiences and skills can be applied to the particular role. It’s also a test of their interest in the position. Asking questions demonstrates your enthusiasm and, as a result, strengthens your credibility as a candidate.

But be careful: asking questions about things you should already know illustrates a lack of research. Perhaps you could ask how a current affairs issue might affect their business. This shows you’ve given the company serious thought. Prepare a list of questions to ask so you don’t forget them. Where possible, relate them to your interviewer and their experiences. A great example would be: “What do you like most about working here?” There is also an opportunity to seek feedback. Asking the interviewer if they have any concerns about you can allow you to overcome any potential objections – but make sure you accept these concerns gracefully.

6. Lack of confidence

First impressions are key: research has found most interviews are decided in the first two to three minutes. Make sure you practise your strong and professional handshake accompanied by a gracious smile and confident body language. But don’t let the confidence end there. From the moment you press the buzzer, you should come across as professional and dynamic. Making confident small talk with both the receptionist and the hiring managers will allow you to expose a little of your personality without the pressure of answering: “Where do you see yourself in five years?”

Confidence (not arrogance) is important. Fight through the nerves and give your potential new employer a good impression of the real you. Essentially you have nothing to lose, so just go for it. Also, interviewers generally don’t want to catch candidates out – they want you to do well. Recruitment is a business challenge and employers go into interviews hoping, praying even, that the candidate is the solution.

Self-analysis is another good area to add to your pile of research and something that candidates regularly forget. Look at the job specification and consider what the client might be looking for. Now think about your own achievements and all the challenges you have overcome. Finally, try and match the two together. Without self-reflection, you may forget key life experiences that you could have applied to those tricky competency questions.

Source: The Guardian

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Are Microservices about to Revolutionize the Internet of Things?

Microservices have helped reinvent software development, and now a new startup says it's going to combine them with edge computing to transform the Internet of Things.

Along with the rise of cloud computing, Agile, and DevOps, the increasing use of microservices has profoundly affected how enterprises develop software. Now, at least one Silicon Valley startup hopes the combination of microservices and edge computing is going to drive a similar re-think of the Internet of Things (IoT) and create a whole new software ecosystem.

Frankly, that seems like a stretch to me, but you can’t argue with the importance of microservices to modern software development. To learn more, I traded emails with Said Ouissal, founder and CEO of ZEDEDA, which is all about “deploying and running real-time edge apps at hyperscale” using IoT devices.

For context, according to the company, there’s an “exodus of data, compute power and software applications from the cloud to a diverse network of smart edge devices, many of which are too small to run large applications. Thus the need for microservices on these edge devices to accomplish tasks ranging from optimizing artificial intelligence in a self-driving car to enhancing data gathering on a smart pump on an oil rig. ZEDEDA is trying to build a platform for integrating these services across diverse devices and projects.”

 

Microservices in the IoT

The concept of microservices in the IoT is not so different from microservice software architectures, Ouissal wrote me. It’s still about the “disaggregation of a general application across the required services that are operating together to perform the application function,” so developers can “apply appropriate compute, storage, and network capability to particular services without impacting every other microservice.”

Similarly, an IoT or edge microservices application, he continued, “would be able to be orchestrated to use the appropriate edge hardware to run the appropriate function.”

That’s the concept, but the reality is still being built. Some companies are developing IoT and edge applications that use microservices architecture as an ingredient to a total solution, Ouissal noted. But while microservice architectures are being used in production IoT and edge computing environments, he added, “it’s not commonly discussed because IoT solutions are focused on business outcomes,” not the technical details, so exact market size remains unknown.

Geography is destiny

Unlike on-premise data centers or the cloud, IoT devices on the edge are limited by geography, so microservices architectures bring specific advantages, including a smaller code footprint and faster boot-up speeds. Microservices can also make it easier to share and reuse scarce edge resources in a virtualized, cloud-native fashion, Ouissal wrote.

Individual edge IoT devices typically need to be extremely power efficient and resource efficient, with the smallest possible memory footprint and consuming minimal CPU cycles. Microservices promise to help make that possible.

“Microservices in an edge IoT environment can also be reused by multiple applications that are running in a virtualized edge,” Ouissal explained. “Video surveillance systems and a facial recognition system running at the edge could both use the microservices on a video camera, for example.”

Microservices also bring distinct security advantages to IoT and edge computing, Ouissal claimed. Microservices can be designed to minimize their attack surface by running only specific functions and running them only when needed, so fewer unused functions remain “live” and therefore attackable. Microservices can also provide a higher level of isolation for edge and IoT applications: In the camera function described above, hacking the video streaming microservice on one app would not affect other streaming services, the app, or any other system.

A vision of an IoT microservices ecosystem

Ouissal sees the emerging IoT microservices ecosystem looking remarkably similar to today’s DevOps and cloud-native environments. He predicts an agile, “edge DevOps” approach, with continual development, deployment, integration, testing, and monitoring done on single functions. In fact, he envisions that a proper edge services platform could open up edge resources to other companies and “rent” compute power to microservices as needed. His goal is an “edge economy” where apps (and their essential microservices) can run anywhere, including on edge assets owned by other entities.

That’s a bold vision, with a lot of moving parts having to come together to make it a reality. But when you look at the success of microservices in advancing the world of modern software, a microservices approach to the IoT doesn’t sound like a bad idea at all.

Source: Networkworld

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

The Combination of Human and Artificial Intelligence will Define Humanity’s Future

Through the past few decades of summer blockbuster movies and Silicon Valley products, artificial intelligence (AI) has become increasingly familiar and sexy, and imbued with a perversely dystopian allure.

What’s talked about less, and has also been dwarfed in attention and resources, is human intelligence (HI).

In its varied forms — from the mysterious brains of octopuses and the swarm-minds of ants to Go-playing deep learning machines and driverless-car autopilots — intelligence is the most powerful and precious resource in existence. Our own minds are the most familiar examples of a phenomenon characterized by a great deal of diversity.

Yet, HI is unique among this variety of intelligence because of its unparalleled ability to design, modify and build new forms of intelligence. HI is what defines us as humans and our relationship with everything on earth. Now, through the combination of HI and AI, we are at the brink of intelligence enhancement, which could be the most consequential technological development of our time, and in history.

The master tool

Intelligence, in its varied forms, powers every opportunity we pursue and every problem we seek to solve. It sits upstream from everything else. It is at once the master tool and the master of all tools. It is not only the most general means to do things, it is also the meaning-making force that decides what is worth doing.

Intelligence is what allows us to create forms of governance, cure disease, create art and music, discover, dream and love. Intelligence is also what decides that these things, rather than other things, are worth doing, by translating discoveries into meanings, experiences into values and values into decisions.

The evolution of human tools, from rocks to AI, can be seen as a trajectory of increasingly powerful effort arbitrage, where we exploit our comparative advantage relative to our tools to do things better, and do more new things. Along this trajectory, tools that embody significant levels of intelligence are our most powerful yet.

In this pursuit of effort arbitrage, the smallest of intelligence advancements has the power to yield enormous gains for humans, individual and collective. A seemingly simple change 2.5 million years ago — using stone tools to butcher animals — led early hominids down the path to becoming modern humans.

From that modest starting point, throughout human history, we created tools that increased our individual and collective intelligence and became extensions of our natural selves. We started with crude functional tools such as hammers and axes. Over just a few tens of thousands of years, we progressed to more intelligent tools, such as thermostats, and governance technologies based on rule-of-law rather than despotism.

With each advance, we happily relinquished a small part of our agency for known pre-programmed outcomes. Our tools could begin doing bigger and bigger things on our behalf, freeing us up for other, more desired tasks.

This progression has continued. As we’ve become more familiar and comfortable with our tools doing things for us, we’ve eagerly traded more of our agency for the anticipated gains, even when we’re unfamiliar with the choices and assumptions built into a particular instance of that trade-off. In general, our risk-taking has paid off, and the resulting real gains have generally far outstripped losses.

For example, Amazon’s recommendation algorithm is one of the most powerful forces in the world, determining which books are read, what ideas are listened to and what we learn — yet how those recommendation decisions are being made is unknown to most of us.

The gains of discovering countless new reading choices are clear. The anticipated losses, including reduced human connection and loss of privacy, either didn’t play out as expected (for example, through various online media, we can now connect with far more people through books, beyond the local indie bookstore owner) or are ones we readily make (giving up some privacy around our reading habits in return for better recommendations).

A new partnership for humanity

We’re at an interesting transition point where we are moving from using our tools as passive extensions of ourselves, to working with them as active partners. An axe or a hammer is a passive extension of a hand, but a drone forms a distributed intelligence along with its operator, and is closer to a dog or horse than a device. Such tools can interact with us in ways never before possible, such as working with us in a choreographed dance for a talent competition or helping us script a novel or new sci-fi movie.

Our tools are now actors unto themselves, and their future is in our hands. Think about the evolution of the car: from horse and carriage to Model-T, from cruise control to adaptive cruise control, and now to driverless cars.

Engineers are now programming cars using subtle ethics models to determine, in situations where an accident is unavoidable, whether to hit pedestrians or veer off the road and jeopardize the driver’s life.

The conclusions such cars reach in real situations might well be very different from the decisions you or I might make if we were in the driver’s seat, but with hindsight we might judge them to be much better, even if they initially seem alien to us. Ideally, such technologically evolved decision-making abilities can flourish alongside evolving HI, to rethink assumptions, reframe possibilities and explore new territories.

We’ve already seen chess evolve to a new kind of game where young champions like Magnus Carlsen have adopted styles of play that take advantage of AI chess engines. With early examples of unenhanced humans and drones dancing together, it is already obvious that humans and AIs will be able to form a dizzying variety of combinations to create new kinds of art, science, wealth and meaning. What could we do if the humans in the picture were enhanced in powerful ways? What might happen if every human had perfect memory, for instance?

In short, we are poised for an explosive, generative epoch of massively increased human capability through a Cambrian explosion of possibilities represented by the simple equation: HI+AI. When HI combines with AI, we will have the most significant advancement in our capabilities of thought, creativity and intelligence in history.

While we’re starting with HI+AI in health diagnosis, transportation coordination, art and music, our partnership is rapidly extending into co-creation of technology, governance and relationships, and everywhere else our HI+AI imagination takes us.

The biggest bottleneck in opening up this powerful new future is that we humans are currently highly limited in how we can participate in these possibilities. Our connection with our new creations of intelligence is limited by screens, keyboards, gestural interfaces and voice commands — constrained input/output modalities. We have very little access to our own brains, limiting our ability to co-evolve with silicon-based machines in powerful ways.

Relative to the ease and speed with which we can make progress on the development of AI, HI, speaking solely of our native biological abilities, is currently a landlocked island of intelligence potential. Unlocking the untapped capabilities of the human brain, and connecting them to these new capabilities, is the greatest challenge and opportunity today.

The single most powerful avenue for achieving this unlocking today is neuroprosthetics. In recent years, research labs around the world have made enormous strides in understanding how the brain works, how to connect it to outside sources and how we might tap more deeply into its potential. The most immediate need for these devices is apparent in the growing number of people living longer lives while suffering from neurodegenerative disorders. These devices — by directly extending HI, including our memory and other cognitive capabilities — could lead to unprecedented longevity of the mind and body. (Full disclosure: I’ve started a company in this arena.)

There are other paths to improved HI, including genomics and pharmacological interventions. But these have one severe limitation: their inability to extend the brain’s ability to communicate with our tools of intelligence (AI).

To truly realize the potential of HI+AI, we need to increase the capacity of people to take in, process and use information by orders of magnitude. Neuroprosthetics are the most promising avenue for meeting this challenge.

A new narrative

From the writings of Isaac Asimov to Terminator and Doctor Who, we have seen visions of future intelligence that have influenced how we all imagine our machine-filled future. These visions have sensitized society to the downsides and risks of potential future machine intelligence. As with all new technologies, losses are much easier to imagine than gains.

It is certainly true that with every new technology we create, new risks emerge that need thoughtful consideration and wise action. Medical advances that saved lives also made germ warfare possible; chemical engineering led to fertilizers and increased food production but also to chemical warfare. Nuclear fission created a new source of energy but also led to nuclear bombs.

As we embark on the greatest human expedition yet, now is the time for a discussion about HI+AI. But rather than letting risk-anchored scaremongering drive the discussion, let’s start with the promise of HI+AI; the pictures we paint depend upon the brushes we use.

The narratives we create for the future of HI+AI matter because they create blueprints of action that contend for our decision-making, consciously and subconsciously. Adopting a fear-based narrative as our primary frame of reference is starting to limit the imagination, curiosity and exploratory instincts that have always been at the core of being human.

An epic adventure

We are all alive at a time when we are gaining access to unprecedented powers of creation. Using our natural intelligence and the external extensions of intelligence we’ve progressively built over the last millennium, we have now developed tools of creation such as genomics, synthetic biology and robotics that literally allow us to program our existence in any way we can imagine. We have progressed from players to makers of the game.

It’s something that I feel immensely grateful for and which has me jumping out of bed in the mornings. We’re living a story of epic proportions and the future is ours to seize.

This is precisely why HI is the most important thing we could possibly be working on right now. At a time when the greatest opportunities in history are before us, we shouldn’t become the biggest limiting factor in our own stories.

Source: TechCrunch

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

The Role of Trust in Human-Robot Interaction

As robots become increasingly common in a wide variety of domains—from military and scientific applications to entertainment and home use—there is an increasing need to define and assess the trust humans have when interacting with robots. In human interaction with robots and automation, previous work has discovered that humans often have a tendency to either overuse automation, especially in cases of high workload, or underuse automation, both of which can make negative outcomes more likely. Furthermore, this is not limited to naive users, but applies to experienced ones as well. Robots bring a new dimension to previous work on trust in automation, as many envision them working as teammates with their operators in increasingly complex tasks. In this chapter, our goal is to highlight previous work in trust in automation and human-robot interaction and to draw conclusions and recommendations based on the existing literature. We believe that, while significant progress has been made in recent years, especially in quantifying and modeling trust, there are still several places where more investigation is needed.

Robots and other complex autonomous systems offer potential benefits through assisting humans in accomplishing their tasks. These beneficial effects, however, may not be realized due to maladaptive forms of interaction. While robots are only now being fielded in appreciable numbers, a substantial body of experience and research already exists characterizing human interactions with more conventional forms of automation in aviation and process industries.

In human interaction with automation, it has been observed that the human may fail to use the system when it would be advantageous to do so. This has been called disuse (underutilization or under-reliance) of the automation [97]. People also have been observed to fail to monitor automation properly (e.g. turning off alarms) when automation is in use, or to accept the automation’s recommendations and actions when inappropriate [71, 97]. This has been called misuse, complacency, or over-reliance. Disuse can decrease automation benefits and lead to accidents if, for instance, safety systems and alarms are not consulted when needed. Another maladaptive attitude is automation bias [33, 55, 77, 88, 112], a user tendency to ascribe greater power and authority to automated decision aids than to other sources of advice (e.g. humans). When the decision aid’s recommendations are incorrect, automation bias may have dire consequences [27, 88, 89] (e.g. errors of omission, where the user does not respond to a critical situation, or errors of commission, where the user does not analyze all available information but follows the advice of the automation).

Both naïve and expert users show these tendencies. In [128], it was found that skilled subject matter experts had misplaced trust in the accuracy of diagnostic expert systems (see also [127]). Additionally, the Aviation Safety Reporting System contains many reports from pilots that link their failure to monitor to excessive trust in automated systems such as autopilots or the FMS [90, 119]. On the other hand, when corporate policy or federal regulations mandate the use of automation that is not trusted, operators may “creatively disable” the device [113]. In other words: disuse the automation.

Studies have shown [64, 92] that trust towards automation affects reliance (i.e. people tend to rely on automation they trust and not use automation they do not trust). For example, trust has frequently been cited [56, 93] as a contributor to human decisions about monitoring and using automation. Indeed, within the literature on trust in automation, complacency is conceptualized interchangeably as the overuse of automation, the failure to monitor automation, and lack of vigilance [66, 79, 96]. For optimal performance of a human-automation system, human trust in automation should be well-calibrated. Both disuse and misuse of automation have resulted from improper calibration of trust, which has also led to accidents [51, 97].

In [58], trust is conceived to be an “attitude that an agent (automation or another person) will help achieve an individual’s goals in a situation characterized by uncertainty and vulnerability.” A majority of research in trust in automation has focused on the relation between automation reliability and operator usage, often without measuring the intervening variable, trust. The utility of introducing an intervening variable between automation performance and operator usage, however, lies in the ability to make more precise or accurate predictions with the intervening variable than without it. This requires that trust in automation be influenced by factors in addition to automation reliability/performance. The three dimensional (Purpose, Process, and Performance) model proposed by Lee and See [58], for example, presumes that trust (and indirectly, propensity to use) is influenced by a person’s knowledge of what the automation is supposed to do (purpose), how it functions (process), and its actual performance. While such models seem plausible, support for the contribution of factors other than performance has typically been limited to correlation between questionnaire responses and automation use. Despite multiple studies of trust in automation, the conceptualization of trust and how it can be reliably modeled and measured is still a challenging problem.

In contrast to automation where system behavior has been pre-programmed and the system performance is limited to the specific actions it has been designed to perform, autonomous systems/robots have been defined as having intelligence-based capabilities that would allow them to have a degree of self governance, which enables them to respond to situations that were not pre-programmed or anticipated in the design. Therefore, the role of trust in interactions between humans and robots is more complex and difficult to understand.

In this chapter, we present the conceptual underpinnings of trust in Sect. 8.2, and then discuss models of, and the factors that affect, trust in automation in Sects. 8.3 and 8.4, respectively. Next, we will discuss instruments for measuring trust in Sect. 8.5, before moving on to trust in the context of human-robot interaction (HRI) in Sect. 8.6 both in how humans influence robots, and vice versa. We conclude in Sect. 8.7 with open questions and areas of future work.

 

Source: The Role of Trust in Human-Robot Interaction

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Quantum algorithm could help AI think faster

One of the ways that computers think is by analysing relationships within large sets of data. An international team has shown that quantum computers can do one such analysis faster than classical computers for a wider array of data types than was previously expected.

 

The team's proposed quantum linear system algorithm is published in Physical Review Letters. In the future, it could help crunch numbers on problems as varied as commodities pricing, social networks and chemical structures.

"The previous quantum algorithm of this kind applied to a very specific type of problem. We need an upgrade if we want to achieve a quantum speed-up for other data," says Zhikuan Zhao, corresponding author on the work.

The first quantum linear system algorithm was proposed in 2009 by a different group of researchers. That algorithm kick-started research into quantum forms of machine learning, or artificial intelligence.

A linear system algorithm works on a large matrix of data. For example, a trader might be trying to predict the future price of goods. The matrix may capture historical data about price movements over time and data about features that could be influencing these prices, such as currency exchange rates. The algorithm calculates how strongly each feature is correlated with another by 'inverting' the matrix. This information can then be used to extrapolate into the future.

"There is a lot of computation involved in analysing the matrix. When it gets beyond say 10,000 by 10,000 entries, it becomes hard for classical computers," explains Zhao. This is because the number of computational steps goes up rapidly with the number of elements in the matrix: every doubling of the matrix size increases the length of the calculation eight-fold.

The 2009 algorithm could cope better with bigger matrices, but only if their data is sparse. In these cases, there are limited relationships among the elements, which is often not true of real-world data. Zhao, Prakash and Wossnig present a new quantum linear system algorithm that is faster than both the classical and the previous quantum versions, without restrictions on the kind of data it crunches.

As a rough guide, for a 10,000-by-10,000 matrix, the classical algorithm would take on the order of a trillion computational steps, the first quantum algorithm some tens of thousands of steps, and the new quantum algorithm just hundreds of steps. The algorithm relies on a technique known as quantum singular value estimation.

There have been a few proof-of-principle demonstrations of the earlier quantum linear system algorithm on small-scale quantum computers. Zhao and his colleagues hope to work with an experimental group to run a proof-of-principle demonstration of their algorithm, too. They also want to do a full analysis of the effort required to implement the algorithm, checking what overhead costs there may be.

Showing a real quantum advantage over the classical algorithms will need bigger quantum computers. Zhao estimates that "We're maybe looking at three to five years in the future when we can actually use the hardware built by the experimentalists to do meaningful quantum computation with application in artificial intelligence."


Source: Phys.org

 

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

There Are Over 1,000 Alternatives to Bitcoin You’ve Never Heard Of

Bitcoin gets all the attention, especially since it recently rocketed towards $20,000. But many other cryptocurrencies exist, and more are being created at an accelerating rate. A quick look at coinmarketcap.com shows over 1,400 alternatives to Bitcoin (as of this writing), with a combined value climbing towards $1 trillion. So if Bitcoin is so amazing, why do these alternatives exist? What makes them different?

The easy answer is that many are simply copycats trying to piggyback on Bitcoin’s success. However, a handful have made key improvements on some of Bitcoin’s drawbacks, while others are fundamentally different, allowing them to perform different functions. The far more complicated—and fascinating—answer lies in the nitty-gritty details of blockchain, encryption, and mining.

To understand these other cryptocurrencies, Bitcoin’s shortcomings need to first be understood, as the other currencies aim to pick up where Bitcoin falls short.

The Problems With Bitcoin

Bitcoin’s block size is only 1 MB, drastically limiting the number of transactions each block can hold. With the pre-programmed interval of 10 minutes between added blocks, this gives a theoretical maximum of about 7 transactions per second. Visa and PayPal handle orders of magnitude more transactions per second, so Bitcoin can’t compete, and with the popularity of Bitcoin soaring, the problem is going to get worse. As of now, around 200,000 transactions are backlogged.
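
The 7-transactions-per-second ceiling is simple arithmetic, shown below; the 250-byte average transaction size is an assumption on my part, since the article doesn’t state one.

block_size_bytes = 1_000_000   # 1 MB block size
avg_tx_bytes = 250             # assumed average transaction size
block_interval_s = 600         # one block every 10 minutes

tx_per_block = block_size_bytes // avg_tx_bytes   # about 4,000 transactions
tx_per_second = tx_per_block / block_interval_s   # about 6.7 per second
print(tx_per_block, round(tx_per_second, 1))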

Bitcoin’s scalability problem is also likely to make mining more difficult and increase mining fees. Adding blocks to the blockchain requires an enormous amount of computation to find a valid solution to the SHA-256 cryptographic hash puzzle, for which the miner is rewarded with a predetermined, geometrically decreasing number of Bitcoins, currently 12.5 per block.

However, each new block takes more computing than the last, meaning it becomes more difficult for less reward. To help offset this, miners can charge fees, and with it becoming more difficult to make a profit, the fees are only going to go up.
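
The computation being paid for is a brute-force search over SHA-256. Here is a toy Python version of that puzzle; expressing difficulty as a count of leading zero digits is a simplification of Bitcoin’s real target mechanism, and the block text is made up.

import hashlib

def mine(block_data, difficulty=4):
    """Try nonces until the block's SHA-256 hash starts with
    `difficulty` hex zeros (real Bitcoin difficulty is vastly higher)."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

print(mine("example block: alice pays bob 1 BTC"))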

Because of the computing power needed to process each block, it has been estimated that each transaction requires enough electricity to power the average home for nine days. If this is true, and if Bitcoin continues to grow at the same rate, some have predicted it will reach an unsustainable level within a decade.

Furthermore, Bitcoin’s blockchain has only one purpose: to handle Bitcoin. Given the complexity of the system, it could be doing much more. Also, Bitcoin is not entirely anonymous. For any given Bitcoin address, the transactions and the balance can be seen, as they are public and stored permanently on the network. The details of the owner can be revealed during a purchase.

Altcoins

Ignoring the copycats, several Bitcoin alternatives—or altcoins—have gained popularity. Some of these are a result of changing the Bitcoin code, which is open-source, effectively creating a hard fork in the blockchain and a new cryptocurrency. Others have their own native blockchains.

Hard forks include Bitcoin Cash, Bitcoin Classic, and Bitcoin XT, all three of which increased the block size. XT changed the block size to 8 MB, allowing for up to 24 transactions per second, whereas Classic only increased it to 2 MB. While these two are now terminated due to a lack of community support, Cash is still going. Its major change was to do away with Segregated Witness, which reduces the size of a transaction by removing the signature data, allowing for more transactions per block.

Another Bitcoin derivative is Litecoin. The major changes from Bitcoin are that the creator, Charlie Lee, reduced the block generation time from 10 minutes to 2.5, and instead of using SHA-256, it uses scrypt, which is considered by some to be a more efficient hashing algorithm.

As far as native blockchains go, there are a lot of altcoins.

One of the most popular—at least by market capitalization—is Ethereum. The key element that distinguishes Ethereum from Bitcoin is that its language is Turing-complete, meaning it can be programmed for just about anything, such as smart contracts, not just its currency, Ether. For example, the United Nations has adopted it to transfer vouchers for food aid to refugees, keep track of carbon outputs, etc.

Monero has solved Bitcoin’s privacy issue. It uses ring signatures, which allow for information about the sender to hide among other pieces of data, effectively creating stealth addresses. This makes the Monero blockchain opaque, not transparent like other blockchains. However, programmers have included a “spend” key and a “view” key, which allow for optional transparency if agreed upon for specific transactions.

Dash has avoided Bitcoin’s logjam by splitting the network into two tiers. The first handles block generation done by miners, much like Bitcoin, but the second tier contains masternodes. These handle the new services of PrivateSend and InstantSend, and they add a level of privacy and speed not seen in other blockchains. These transactions are confirmed by a consensus of the masternodes, thus removing them from the computing and time-intensive project of block generation.

IOTA just did away with blocks altogether. Its name stands for the Internet of Things Application, and it depends on users to validate transactions instead of relying on miners and their souped-up computers. As a user conducts a transaction, they are required to validate two previous transactions, so the rate of validation always scales with the number of transactions.
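
A toy sketch of that rule, assuming nothing about IOTA’s actual data structures: each new transaction picks two earlier ones to validate, so validation capacity grows with transaction volume.

import random

random.seed(0)
tangle = {"tx0": [], "tx1": []}  # transaction id -> the two it validated

def submit(tx_id):
    """A new transaction must first validate two earlier ones."""
    tangle[tx_id] = random.sample(sorted(tangle), 2)

for i in range(2, 7):
    submit(f"tx{i}")
for tx, validated in tangle.items():
    print(tx, "validated", validated)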

On the other hand, Ripple, which is now one of the top cryptocurrencies by market capitalization, has taken a completely different approach. While other cryptocurrencies are designed to replace the traditional banking system, Ripple attempts to strengthen it by facilitating bank transfers. That is, bank transfers depend on systems like SWIFT, which is expensive and time-consuming, but Ripple’s blockchain can perform the same functions far more efficiently. Over 100 major banking institutions are signed up to implement it.

Bitcoin isn’t going anywhere anytime soon, but budding crypto-enthusiasts should give heed to these competitors and many others, as they may one day replace it as the dominant cryptocurrency.

 

Source: Singularity Hub

 

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Jobs are changing. But two skills will always be in demand

Fifty years ago, work in developed countries was full of relative certainties. Aside from the periodic recession, most nations were at or near full employment.

Rapid productivity growth was underpinning an improvement in living standards.

A university degree was a meal ticket to a high-paying, secure job as a professional. And for workers with a high school diploma, jobs on manufacturing assembly lines offered a pathway to middle-class prosperity and upward mobility.

Now we live in a much less certain world.

 

So, what skills will always be in demand?

In many countries, recovery from the latest recession has been gradual and protracted, with unemployment and underemployment coming down only slowly.

Global productivity growth has decelerated sharply, as has pay growth. Cutbacks of private sector benefits and the government safety net are forcing workers to bear more risk than they did in the past.

And while their economic impact has thus far been muted, automation and artificial intelligence raise the spectre of mass displacement of workers.

Performing under pressure

So what are workers to do?

We often hear that workers will have to plan ahead, engage in continuous retraining to upskill themselves, and expect to radically pivot multiple times throughout their careers.

That’s a lot of pressure to lay on a person.

It’s hard to know what types of skills are most important to learn, or how to best position yourself to succeed in the face of changing economic times.

 

Your skills are dynamic

Today the World Economic Forum releases its 2017 Human Capital Report, which evaluates countries on how well they’ve equipped their workforce with the knowledge and skills needed to create value – and be successful – in the global economic system.

At LinkedIn, our vision is to create economic opportunity for every member of the global workforce. That’s why we’ve partnered with the World Economic Forum to contribute to the creation of the 2017 Human Capital Report.

One of the unique advantages of LinkedIn data is the way it can be used to analyse the labour market in an unprecedentedly granular way. We can break down human capital into its most fundamental and critical component unit: skills.

We track the supply and demand of 50,000 distinct skills as provided by our members. This allows us to identify geographically where there is a shortage of particular skills, or where they are in surplus. It allows us to identify which skills are emerging, or growing rapidly, or are persistent over time, or shrinking in popularity.

We can identify the “skills genome” – the unique skills profile – of a city, a job function, or an industry. These types of insights make it possible to advise on which skills are needed when the economy next changes gears.
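
As an illustration only (the counts and the method below are invented, and LinkedIn’s actual analysis is far more sophisticated), flagging a geographic skill shortage can be as simple as comparing demand and supply:

# Hypothetical counts: job postings demanding a skill vs. members holding it.
demand = {"London": {"python": 900, "sql": 400},
          "Paris": {"python": 200, "sql": 350}}
supply = {"London": {"python": 300, "sql": 500},
          "Paris": {"python": 250, "sql": 100}}

for city in demand:
    for skill, wanted in demand[city].items():
        have = supply[city].get(skill, 1)
        ratio = wanted / have   # above 1 means more demand than supply
        status = "shortage" if ratio > 1.2 else ("surplus" if ratio < 0.8 else "balanced")
        print(f"{city:8}{skill:8}demand/supply = {ratio:.2f} -> {status}")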

Our research in this year’s Human Capital Report explores the skills genomes of different university degrees over time.

There are certain skills commonly held by all types of college majors; there are other specialty skills that are unique to specific fields.

So, which skills should you learn?

We found that, across diverse fields of study, there are certain core, cross-functional skills that underpin a career.

These include 1) interpersonal skills, like leadership and customer service, and 2) basic technology skills, like knowing how to use word processing software and manipulate spreadsheets.

Having a strong base in these cross-functional skills is important across industries and job titles – and also gives people the capacity to pivot careers when needed.

Retraining becomes a lot easier when you need to learn just one or two new things, rather than an entire new field of knowledge.

While cross-functional skills are versatile and likely to stand the test of time, they aren’t necessarily the ones that will launch you into a lucrative career off the bat.

Indeed, our data shows that younger generations tend to study more specialized fields than their predecessors, and today’s travel and tourism or international studies majors have more niche and specialized knowledge bases than, say, the history major of yore.

This broader economic trend towards specialization reflects a widening economy that demands more specific skills from the workforce as it grows.

Skills for life

What is clear is that interpersonal skills are unlikely to be rendered obsolete by technological innovation or economic disruptions. In a changing workforce, it's having a strong foundation in these versatile, cross-functional skills that allows people to successfully pivot.

Learning the latest or hottest technology skills shouldn’t come at the expense of investing in the basic, core skills that people need to be successful in the workforce.

Helping governments to better understand, analyse and approach the development of their human capital in this way is our ultimate hope.


Source: We Forum

 

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Why AI isn't taking the Human out of Human Resources

Looking for a job and getting hired used to be quite simple. Job listings were put into the local newspaper or on a town job board until the position was filled. As quaint and comprehensive as the methods were, the world’s ever-growing population made them obsolete. In today’s world, more jobs are being posted and more candidates are applying for them. Fortunately, technology has advanced with the years and AI was created to help both candidates and employers struggling to meet their needs.

LET’S SET THE RECORD STRAIGHT

AI is not coming to take over the world and eliminate humans. Well, at least not in the world of HR tech. AI is just another tool for those of us on the daily grind. Until we create a legitimate artificial consciousness, let’s agree that AI isn’t going to replace any human jobs. Rather, it will enable humans to pursue more specialized work. It allows Human Resources workers to finally focus on working with humans. Now let’s see how it is already doing just that.

AI Optimizes Job Descriptions

AI is often used to solve a problem before it presents itself. One example is using AI to confirm the utility of the job descriptions presented in your job listings. HR personnel often have to spend an unfortunate amount of time reading applications from unqualified candidates as a result of a vague or inaccurate description. AI can use data from millions of other job posts to ensure that the information is properly targeted at the candidates who are able to fill your job opening.

AI Eliminates Repetitive Tasks

The problem AI solves isn’t just the boredom that comes with repetition, but the mind-numbing frustration that comes with inconsistencies in resumes and the sifting required to pull out the needed information. Instead of having an applicant blindly fill out their resume and send it in for consideration, why not have them talk to a chatbot that gathers the information you need for your job opening? That’s precisely what many companies are doing today. Here, AI does all of the information parsing for the hiring manager so they can focus on the most human part of Human Resources.

They have the opportunity to devote their attention and resources to the interview and the relationship building with their candidates. This process also helps to eliminate any potential bias or discrimination. An AI can’t have any preconceived notions of a candidate based on gender, race, religious affiliations, etc. Having a reliable source for choosing candidates without running into any legal troubles is invaluable in itself.

AI Makes the Onboarding Process Nice and Smooth

There are a lot of tasks for a new recruit and there are many ways AI can help them through the transition into the office. Contractual paperwork tends to be one of the more overbearing prospects when taking on a new person. They are legally obligated to read it, understand it, fill in all the blanks and never lose it. AI can help organize and keep track of all of these things and more. Forms, login credentials and schedules can be all organized using AI. Again, this opens up the employer to focus on the human connections and building relationships.

AI chatbots can also help answer any questions about HR policies. This proves to be a more efficient way of expressing the company’s expectations when necessary. Company policies are often available for all to view, but they are less simple to search through for answers. AI ensures that if there are any uncertainties, your new and old recruits will always have a swift means of verifying their actions.

As AI continues to develop, we will be able to hand off more of the tasks that keep us from face-to-face communication and the other work that AI can’t do. Finding and accepting new employees is still a human process. It requires a real connection and a true understanding of what need is being filled and by whom. AI is the perfect tool to elevate this process and help the HR realm evolve.

Source: Social Hire

If you’re interested in a career in Artificial Intelligence or Robotics call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Artificially Intelligent Robot Predicts Its Own Future by Learning Like a Baby

For toddlers, playing with toys is not all fun-and-games—it’s an important way for them to learn how the world works. Using a similar methodology, researchers from UC Berkeley have developed a robot that, like a child, learns from scratch and experiments with objects to figure out how to best move them around. And by doing so, this robot is essentially able to see into its own future.

A robotic learning system developed by researchers at Berkeley’s Department of Electrical Engineering and Computer Sciences visualizes the consequences of its future actions to discover ways of moving objects through time and space. Called Vestri, and using technology called visual foresight, the system can manipulate objects it’s never encountered before, and even avoid objects that might be in the way.

 

Importantly, the system starts as a tabula rasa, using unsupervised and unguided exploratory sessions to figure out how the world works. That’s an important advance because the system doesn’t require an army of programmers to code in every possible physical contingency, which, given how complicated and varied the world is, would be a hideously onerous (and likely intractable) task. In the future, scaled-up versions of this self-learning predictive system could make robots more adaptable in factory and residential settings, and help self-driving vehicles anticipate events on the road.

Led by UC Berkeley assistant professor Sergey Levine, the researchers built a robot that can predict what it’ll see through a camera if it performs a certain sequence of movements. As noted, the system is not pre-programmed, and instead learns through a process called model-based reinforcement learning. It sounds fancy, but it’s similar to the way a toddler learns how to move objects around through repetition and trial-and-error. Child psychologists call this “motor babbling,” and the UC Berkeley researchers applied the same methodology and terminology to Vestri.

“In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualize how different behaviors will affect the world around it,” said Levine in a statement. “This can enable intelligent planning of highly flexible skills in complex real-world situations.”

To train the system, the researchers let the robot “play” with several objects on a small table. A form of artificial intelligence known as deep learning was applied to recurrent video prediction, allowing the bot to foresee how an image’s pixels would move from one frame to another based on its own movements. In tests, the robot’s self-acquired model of the world allowed it to move objects it had never dealt with before to desired locations, sometimes maneuvering them around obstacles.
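For a feel of how that works, here is a toy sketch of the planning loop behind visual foresight (sometimes called visual model-predictive control). The stub predictor below just translates pixels and stands in for the learned recurrent video-prediction network; every name and number is illustrative, not the Berkeley team’s code:

    # Sample candidate action sequences, roll each through a (stub) video-
    # prediction model, and execute the sequence whose predicted final frame
    # looks most like the goal image.
    import numpy as np

    rng = np.random.default_rng(0)

    def predict_frames(frame, actions):
        """Stand-in for a learned model: each action shifts the image."""
        frames, pos = [], np.zeros(2)
        for action in actions:
            pos += action
            frames.append(np.roll(frame, tuple(pos.astype(int)), axis=(0, 1)))
        return frames

    def plan(frame, goal, horizon=5, n_candidates=64):
        """Pick the action sequence whose predicted last frame is closest to goal."""
        best_cost, best_actions = np.inf, None
        for _ in range(n_candidates):
            actions = rng.integers(-3, 4, size=(horizon, 2))   # random pushes
            cost = np.sum((predict_frames(frame, actions)[-1] - goal) ** 2)
            if cost < best_cost:
                best_cost, best_actions = cost, actions
        return best_actions

    # Toy usage: plan to push a bright blob from the corner toward the centre.
    frame = np.zeros((32, 32)); frame[2:6, 2:6] = 1.0
    goal = np.zeros((32, 32)); goal[14:18, 14:18] = 1.0
    print(plan(frame, goal)[:2])   # the first two planned actions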

“Children can learn about their world by playing with toys, moving them around, grasping, and so forth. Our aim with this research is to enable a robot to do the same: to learn about how the world works through autonomous interaction,” Levine said. “The capabilities of this robot are still limited, but its skills are learned entirely automatically, and allow it to predict complex physical interactions with objects that it has never seen before by building on previously observed patterns of interaction.”

As Levine notes, the system is still pretty basic, and it can only “see” a few seconds into the future. Eventually, a self-taught system like this could learn the lay of the land inside a factory, and have the foresight to avoid human workers and other robots sharing the same environment. Applied to autonomous vehicles, the same predictive model could, for instance, let a car judge when it can safely pass a slow-moving vehicle via the oncoming-traffic lane, or avoid a collision.

For Levine’s team, the next step will be to get the robot to perform more complex tasks, such as picking up and putting down objects, and manipulating soft and malleable materials like cloth and rope, as well as fragile objects. This latest research will be presented later today at the Neural Information Processing Systems conference in Long Beach, California.

Source: Gizmodo

 

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

7 Types of Profile Pics You Should Never Post on LinkedIn, According to Recruiters

Recruiters spend a lot of time on LinkedIn combing through thousands of profiles looking for people who match their requirements. To make the process more efficient, recruiters must weed people out based on what they see. There's a famous phrase attributed to Doris Day: "People hear what they see." Recruiters (and anyone else who looks at your profile) literally imagine what you're like based on your photo. Over time, as they talk to hundreds of candidates, recruiters naturally form opinions, also known as candidate bias, about people with certain things on their profiles. Let's face it: hiring is discrimination. Recruiters must find a way to narrow down a huge number of candidates, which means something as simple as your profile picture can determine whether you get contacted.

If a picture is worth a thousand words, then these scream, "Don't hire me!"

I asked a large group of recruiters I know for their biggest pet peeves on candidates' LinkedIn profiles. The feedback was overwhelming. There were many things that annoy them. But, the overwhelming response was centered on profile pictures. Here are the top seven epic fails you can make on LinkedIn with your photo:

The "my puppy is the cutest" photo. Heather L. says, "I don't want to see pictures of your cats, dogs, car, etc.... I really don't need to see fun pics." Consider this: for every dog-lover out there, there's a recruiter that's a cat person. Don't ruin your chances by oversharing about your preferences.

The "I'm a woodsman" photo. Rebecca S. says, "I saw one with a cut up deer in a wheel barrel. It was AWFUL!" LinkedIn is NOT the place to try to look strong, intense, or unique. You are trying to get a job. You should look as friendly and approachable as possible.

The "I'm best man material" photo. Kendra S. says, "I had to ask a candidate to replace a picture of himself in tux holding a Heineken bottle. Had to explain Best Man title would not be applicable nor relevant for winning job." While they say everyone looks better dressed up, the tux is overkill. Better still, keep it to a headshot so your clothing (and, beer choice), isn't judged.
 

The "I'm a mystery" photo. Amber S. says, "Not smiling in the picture or doing the smirk smile." As mentioned earlier, the goal of a profile picture is to look approachable. The smirk can be interpreted as cocky, conniving, and sassy. No smile can appear too serious and anxious. Find your natural smile and let it shine through in the photo. Make sure your eyes are smiling too.

The "I'm sexy and I know it" photo. Jennifer F. says, "Inappropriate profile pics. I've seen candidate's pics from their boudoir photo shoot. This is a business networking site. If you don't have a headshot, stand in front of a blank wall in appropriate business attire, and have someone take your picture." In a time when the #MeToo movement is changing the workplace as we know it, sexy photos are a complete no-no.

The "but, it had the best light" photo. Dave T. says "I hate car selfies." and Stacy J. says, "anything too cutesy or unprofessional." Don't put up a picture just because the lighting was good. Or, you think you look adorable. This isn't a dating app.

And the worst offender? No photo at all. DeAnna T. says, "A profile with no picture." In fact, most of the recruiters agreed that the lack of a photo is an immediate eliminator. Why? To them, it usually means the person has something to hide, isn't tech-savvy, or the profile is fake or has been abandoned by someone too lazy to care how their professional persona looks on LinkedIn.

P.S. - It doesn't stop at the photo.

Your entire profile is being judged. The headline, summary, and work history are equally important. The right amount of text and the appropriate keywords are both critical to making a good first impression with your LinkedIn profile. Taking time to understand what a well-optimized profile looks like can dramatically increase the number of views and messages you get from recruiters.

Source: Inc

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Binary randomization makes large-scale vulnerability exploitation nearly impossible

One of the main reasons cyber risk continues to increase exponentially is the rapid expansion of attack surfaces – the places where software programs are vulnerable to attack or probing by an adversary. Attack surfaces, according to the SANS Institute, can include any part of a company’s infrastructure that exposes its networks and systems to the outside, from servers and open ports to SQL databases, email authentication and even employees with “access to sensitive information.” They can also include user input via keyboard or mouse, network traffic and external hardware that is not protected by cyberhardening technology.

It would be easy to blame the Internet of Things (IoT) for the expanding attack surfaces, as Intel projects two billion smart devices worldwide by 2020. But in reality, the IoT is only part of the attack surface epidemic.

According to Cybersecurity Ventures, 111 billion new lines of code are now written each year, introducing vulnerabilities both known and unknown. Not to be overlooked as a flourishing attack vector are humans, who are arguably both the most important and the weakest link in the cyberattack kill chain. In fact, in many cybersecurity circles there is a passionate and ongoing debate about just how much of the burden of preventing and detecting cyber threats businesses should put on employees. What is not up for debate, however, is just how prone humans are to intentionally or unintentionally opening the digital door for threat actors to walk in. This is most evident in the fact that 9 out of 10 cyberattacks begin with some form of email phishing targeting workers with mixed levels of cybersecurity training and awareness.

Critical Infrastructure Protection Remains a Challenge

Critical infrastructure, often powered by SCADA systems and equipment now identified as part of the Industrial Internet of Things (IIoT), is also a major contributor to attack surface expansion. Major attacks targeting these organizations stem more from memory corruption errors and buffer overflow exploits than from spear-phishing or email spoofing, and tend to be the work of nation states and cyber terrorists rather than generic hackers.

As mentioned in our last blog post, “Industrial devices are designed to have a long life span, but that means most legacy equipment still in use was not originally built to achieve automation and connectivity.” The IIoT does provide many efficiencies and cost-saving benefits to companies for which operational integrity, confidentiality and availability are of the utmost importance, but introducing technology into heavy machinery and equipment that wasn’t built to communicate outside a facility has proven challenging. The concept of IT/OT integration, which is meant to merge the physical and digital security of corporations and facilities, has failed to reduce vulnerabilities enough to significantly lower risk. As a result, attacks that exploit critical infrastructure vulnerabilities, such as WannaCry, have become the rule rather than the exception.

What if Luke Couldn’t Destroy the Death Star? 

To date, critical infrastructure cybersecurity has relied too heavily on network monitoring and anomaly detection in an attempt to catch suspicious traffic before it turns problematic. The challenge with this approach is that it is reactive, and only effective after an adversary has already breached some level of defenses.

We take an entirely different approach, focusing on prevention by denying malware the uniformity it needs to propagate. To do this, we use a binary randomization technique that shuffles a program’s basic constructs, known as basic blocks, to produce code that is functionally identical but logically unique. When attackers develop an exploit for a known vulnerability in a program, it helps them to know where all the code is located so they can repurpose it to do their bidding. Binary randomization renders that prior knowledge useless, because each instance of a program has its code in different locations.
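A toy model of the idea, with made-up block names and addresses rather than RunSafe’s actual transform, shows why a hard-coded exploit stops traveling between instances:

    # Each "instance" of the program lays out the same basic blocks in a
    # different order, so an address learned from one instance rarely points
    # at the same code in another.
    import random

    BLOCKS = ["prologue", "parse_input", "check_auth", "read_file", "alloc_buf",
              "copy_buf", "secret_gadget", "write_log", "cleanup", "epilogue"]

    def build_instance(seed):
        """Return one instance's randomized address -> basic-block layout."""
        layout = BLOCKS[:]
        random.Random(seed).shuffle(layout)
        return {slot * 0x100: block for slot, block in enumerate(layout)}

    # The attacker studies instance 0 and hard-codes the gadget's address...
    reference = build_instance(seed=0)
    gadget_addr = next(a for a, b in reference.items() if b == "secret_gadget")

    # ...but replaying that exploit against other instances rarely hits.
    hits = sum(build_instance(seed=s).get(gadget_addr) == "secret_gadget"
               for s in range(1, 101))
    print(f"exploit reuse succeeded on {hits}/100 randomized instances")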

One way to visualize the concept of binary randomization is to picture the Star Wars universe at the time when Luke Skywalker and the Rebel Alliance set off to destroy the Death Star. The Rebel Alliance had the blueprints to the Death Star and used those blueprints to find its only weakness. Luke set off in his X-Wing and delivered a proton torpedo directly to the weak spot in the Death Star, destroying it. In this scenario, the Death Star is a vulnerable computer program, and Luke is an adversary trying to exploit said computer program.

Now imagine that the Galactic Empire built 100 Death Stars, each protected by RunSafe’s new Death Star Weakness Randomization. This protection moves the weakness to a different place on each Death Star. Now imagine you are Luke, flying full speed toward the weakness in the Death Star, chased by TIE fighters, only to find that the weakness is not where the blueprint showed. The Rebel attack fails, and the Galactic Empire celebrates by destroying another planet. As in the Death Star scenario, code protected with binary randomization will still contain vulnerabilities, but successfully exploiting a vulnerability across multiple targets becomes much more difficult.

As critical infrastructure attack surfaces continue to expand, binary randomization is poised to reduce attackers’ capacity to exploit vulnerabilities: because each instance of a program is unique, large-scale exploitation of that program becomes nearly impossible – even for Luke Skywalker himself.

 

Source: IIoTworld

If you’re interested in a career in IoT call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Can Artificial Intelligence be trusted?

AI is pretty amazing from our perspective. We use it daily in our algorithmic analysis of the cryptocurrency markets, and it has even helped us personally identify coins we’d never have looked at otherwise, right before they took off (e.g. a 42x gain on one currency over a few days a while back).

But AI could also have its dark side… Elon Musk is crying himself to sleep every night thinking about how terrible it could be, and we all know what happened when Skynet went online…

So can you really trust AI?

I’ll tell you a little secret: you’re already trusting AI and using it every day.

I live in the US, close enough to the Canadian border to make a drive to Montreal in a few hours. I have no idea how to get to Montreal from my house, and no idea how to get around the city once I’m there, yet I managed to find my way over pretty easily and navigate to my favorite spots (PM me for a yummy poutine recommendation…)

How did I do that? I used Google Maps, an AI-powered application that finds the best and fastest way to get from one place to another in many places around the globe.

I’m old enough to remember road trips where you’d have to take a map, figure out the different waypoints and directions and in many cases, stop at a gas station to find out where you are.

Then there was MapQuest: you’d print out directions on paper and try to figure out whether they were leading you to where you wanted to go or to an early grave in a ditch by the side of some backroad.

Then came GPS, and we let it tell us where to go, sometimes to the point of danger and loss of life. (https://theweek.com/articles/464674/8-drivers-who-blindly-followed-gps-into-disaster)

Now we have Google Maps, and the level of trust is pretty much absolute. I know people who use it to drive to work every day, the same route, over and over again, and they would be lost without it…

Compared to the days of paper maps, this is awesome! I can now relax, not worry about getting lost or driving off a bridge and spend time enjoying the ride and time with my family and friends.

This is the scale of trust in technology: we slowly take small steps and increase our trust in it until we hand over an entire task and feel happy with the results.

I’m writing this today because I recently took the next step on the scale of trust with something that we all have issues with – Money.

I don’t remember the first time I bought something online; it was too long ago. But it was a historic moment for me, stepping into the world of e-commerce and trusting my money to someone I couldn’t see.

I do remember the first time I bought something on a mobile device, another personal moment of taking the leap and trusting a new technology with money. It was a camera lens, bought on a really poorly designed website (mobile websites weren’t much of a thing back then) while I was eating lunch at a restaurant in Chicago. Now I was spending money outside of my comfort zone, at a random place in the street, over a wireless connection that I didn’t control.

Yesterday I took another step in trusting technology with my money. I let an AI algorithm make multiple purchases of cryptocurrencies without asking me for permission or even telling me beforehand what those currencies were.

This is equivalent to getting into a car with Google Maps. From this point on, I don’t have to worry about trading anymore. I have an AI to do it for me.

When I did my own trading, there was a lot of fear and stress involved, along with plenty of uncertainty and self-doubt. Surprisingly, letting a machine make these decisions for me was a very calm and relaxing experience. It took away all the fear and emotion from trading and left me with trust. I’ve seen what this AI has done in the past, and I trust it implicitly.

So can you trust AI? So far, I’d say yes. Killer robots and time travelling governors of California? That’s for another post.

Source: Tokenai

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

 

Algorithms will out-perform Doctors in just 10 years time

The power of algorithms to calculate, contemplate and anticipate the needs of patients is improving rapidly and shows no sign of slowing down. Everything from patient diagnosis to therapy selection will soon be moving at exponential rates. Does that mean the end of doctors? Not quite. To better understand technology’s ever-growing role in healthcare, we first have to examine the potential of the tools and timelines we are working with. A recent study at Beth Israel Deaconess Medical Center (BIDMC) and Harvard Medical School showed that AI isn’t about humans versus machines. The researchers trained a deep learning algorithm to identify metastatic breast cancer by interpreting pathology images. The algorithm reached an accuracy of 92.5%, while pathologists reached 97%. But used in combination, the detection rate approached 100 percent (approximately 99.5 percent). Exactly this kind of collaboration between humans and machines is going to play a vital role in the age of AI, and we already have a blueprint of what a productive partnership could look like.
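As a rough sanity check on those numbers, if the algorithm’s misses and the pathologists’ misses were statistically independent (a simplifying assumption on our part, not a claim from the study), the combined miss rate would be the product of the two error rates:

    # Error rates quoted above: algorithm 92.5% accurate, pathologists 97%.
    algo_err = 1 - 0.925              # algorithm misses 7.5% of cases
    human_err = 1 - 0.97              # pathologists miss 3% of cases
    both_miss = algo_err * human_err  # 0.00225, assuming independent errors
    print(f"combined accuracy ~ {1 - both_miss:.2%}")   # ~99.78%

That back-of-envelope figure lands close to the reported roughly 99.5 percent, which is what you would expect if the two reviewers tend to make different mistakes.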

Digitization in the next Decade

Ten years is a long time when you consider that during this period we will gain access to new neurosynaptic processing power such as IBM’s TrueNorth, as well as cloud-based quantum computing. Ten years ago the iPhone was introduced, which has since led to the development of 180,000 registered health apps, roughly 50 new apps a day. Yes, a large share of them aren’t useful, but we can’t ignore the impact apps have had on patients and clinicians. Over the last 5 years we have seen error rates in speech and image recognition drop by over 20 percent, to nearly human accuracy. So it is not a long shot to predict that algorithms will soon outperform humans on specific tasks such as diagnosing disease or selecting the best personalized treatment plan. We can’t ignore technology that, depending on where you live, can deliver 10 to 100 times better results. A new study proving this potential is published every month: even today, diagnostic algorithms have an error rate of only 5% when detecting melanoma, while among the best human specialists it is 16%.

In medicine, error rates have not been the subject of much discussion until now. That’s not because they were not of serious importance, but because they were inevitable – to err is human.

Today’s doctors are no longer in a position to know everything that is being published – on average, 800,000 studies per year appear in more than 5,600 medical journals. What person could ever hope to process all of that? At the current pace of advancement in AI, one can easily assume that 10 years from now algorithms will outperform humans on 80% of today’s classified diagnoses.

I refer specifically to “today’s classified diagnoses” because I believe the impact of precision medicine will bring about a complete change in medicine, and we will have to rewrite the medical textbook.

Thanks to new technologies from genome diagnostics and the application of artificial intelligence, we are able to better understand and influence the development of diseases and aging processes. This means that in the near future it will no longer be primarily a question of treating diseases, but of preventing them.

The End of Doctors?

There are many claims that new technology will eventually replace doctors. Personally, I hope it doesn’t. Studies have already shown that diagnostic and treatment quality is much better when human physicians and algorithms work together. Chess was one of the first areas taken over, and subsequently dominated, by machines almost 20 years ago. After Garry Kasparov, the reigning world champion at the time, lost to the IBM computer ‘Deep Blue’ in 1997, the head-to-head contest between humans and machines lost much of its appeal. Today no human, not even the grandest of all grand masters, can beat even a mid-tier chess program running on an iPhone. After this huge symbolic victory for the machines, there was doubt that humans could ever again contribute something meaningful to the world of chess. But the advent of so-called ‘freestyle chess’ tournaments showed how much humans still had to offer. These events are played by teams that can include any combination of human and machine players. The surprising insight from those tournaments is that the teams with the strongest human/machine partnership dominate even the strongest computer. Kasparov himself explained the results of a 2005 freestyle tournament like this:

The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants.

What helped the team win wasn’t having the best chess players or the most powerful chess computer, but having the best process for collaborating with machines. It was all about the partnership and the complementary interplay between humans and machines. Humans still have a lot to offer the game of chess if they are racing not against the machines but with them. To achieve the best results, humans and machines have to collaborate — they have to become partners. But this requires a new set of skills and a new way of thinking on the part of humans.

We also know that our current healthcare systems lack the human element. Recent studies have shown that social determinants account for more than 50% of our health status. In a seven-minute consultation, a physician spends less than 20% of the time on true human interaction; the rest goes to collecting clinical data, reasoning, documenting, administrating and coordinating. Some of the most important parts of care delivery, empathy and compassion, have become neglected, and this starts as early as medical school.

Compassionate care makes a difference in how well a patient recovers from illness. In healthcare, good communication and emotional support sometimes decide whether a patient lives or dies, yet today there is no billing code for compassion. Ken Schwartz, the founder of the Schwartz Center, believed that acts of kindness, the simple human touch from his caregivers, made the unbearable bearable. So if a seven-minute consultation today involves six minutes of activities that will be automated, can we please fight for a system that rewards compassion and the other human values that are so desperately needed in healthcare and will hardly be replaced by robots or machines?

Future Insurance Policies

With more data from machines also comes more empowerment for patients. People will have more and more monitoring tools available, and diagnostics will become increasingly decentralized, so patients will take much more responsibility for themselves. As with doctors, this does not mean that GPs will become superfluous; patients will probably just communicate with them more frequently online. Diabetes patients, for example, can already bring their blood sugar back to normal levels without any medication, simply by using monitoring tools in conjunction with online coaching from their doctor.

Thanks to precision medicine, we will soon be able to measure so-called “biomarkers” in our bodies, which will enable us to read our biological processes and derive diagnoses and prognoses from them, e.g. from our breath. Such tools are already available today, and thanks to them we will soon be able to detect certain types of cancer, such as lung cancer, at a very early stage. And if we detect cancer much sooner, we can treat it much earlier. That also means an insurance provider could save money for its organization and its customers by identifying diseases at a stage where treatment is less costly.

Conclusion

It is important to recognize that as technology’s role in the health sector expands as a result of increased capabilities, many things are subject to change. That does not mean, however, that the roles of humans will disappear so much as they will transform. Some researchers have started to train robots and AI systems to mimic empathy. But can we seriously believe a robot will outperform humans when it comes to delivering bad news? Or would you really want to hear from a robot that you have 7 months, 5 days and 3 hours to live?

It is time we start to actively lead and design our future healthcare systems, so we also have time to redefine the value system that healthcare is based on. It’s up to all of us to define what future we want. These new value systems should focus on the activities that won’t be done better by machines.

 

Source: Dataeconomy

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

World's First 3D Printed Human Corneas

Scientists at Newcastle University have 3D printed the world's first human corneas.
By creating a special bio-ink using stem cells mixed together with alginate and collagen, they were able to print the cornea using a simple low-cost 3D bio-printer.
It's hoped, after further testing, that this new technique could be used to help combat the world-wide shortage of corneas for the 15 million people requiring a transplant.

The first human corneas have been 3D printed by scientists at Newcastle University.

This means that the technique could be used in the future to ensure an unlimited supply of corneas.

As the outermost layer of the human eye, the cornea has an important role in focusing vision.

Yet there is a significant shortage of corneas available to transplant, with 10 million people worldwide requiring surgery to prevent corneal blindness as a result of diseases such as trachoma, an infectious eye disorder.

In addition, almost 5 million people suffer total blindness due to corneal scarring caused by burns, lacerations, abrasion or disease.

The proof-of-concept research, published today in Experimental Eye Research, reports how stem cells (human corneal stromal cells) from a healthy donor cornea were mixed together with alginate and collagen to create a solution that could be printed, a ‘bio-ink’.

Using a simple low-cost 3D bio-printer, the bio-ink was successfully extruded in concentric circles to form the shape of a human cornea. It took less than 10 minutes to print.

The stem cells were then shown to culture – or grow.

 

Unique bio-ink

Che Connon, Professor of Tissue Engineering at Newcastle University, who led the work, said: “Many teams across the world have been chasing the ideal bio-ink to make this process feasible.

“Our unique gel - a combination of alginate and collagen - keeps the stem cells alive whilst producing a material which is stiff enough to hold its shape but soft enough to be squeezed out the nozzle of a 3D printer.

“This builds upon our previous work in which we kept cells alive for weeks at room temperature within a similar hydrogel. Now we have a ready to use bio-ink containing stem cells allowing users to start printing tissues without having to worry about growing the cells separately.”  

The scientists, including first author Ms Abigail Isaacson from the Institute of Genetic Medicine, Newcastle University, also demonstrated that they could build a cornea to match a patient’s unique specifications.

The dimensions of the printed tissue were originally taken from an actual cornea. By scanning a patient’s eye, they could use the data to rapidly print a cornea which matched the size and shape.

Professor Connon added: “Our 3D printed corneas will now have to undergo further testing and it will be several years before we could be in the position where we are using them for transplants.

“However, what we have shown is that it is feasible to print corneas using coordinates taken from a patient eye and that this approach has potential to combat the world-wide shortage.”

Significant progress

Dr Neil Ebenezer, director of research, policy and innovation at Fight for Sight, said: “We are delighted at the success of researchers at Newcastle University in developing 3D printing of corneas using human tissue. 

“This research highlights the significant progress that has been made in this area and this study is important in bringing us one step closer to reducing the need for donor corneas, which would positively impact some patients living with sight loss.

“However, it is important to note that this is still years away from potentially being available to patients and it is still vitally important that people continue to donate corneal tissue for transplant as there is a shortage within the UK. 

“A corneal transplant can give someone back the gift of sight.”

Reference: 3D Bioprinting of a Corneal Stroma Equivalent. Abigail Isaacson, Stephen Swioklo, Che J. Connon. Experimental Eye Research.


Source: NCL

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

10 Powerful Examples Of Artificial Intelligence In Use Today

The machines haven't taken over. Not yet, at least. However, they are seeping their way into our lives, affecting how we live, work and entertain ourselves. From voice-powered personal assistants like Siri and Alexa, to more underlying and fundamental technologies such as behavioral algorithms, suggestive searches and autonomous self-driving vehicles boasting powerful predictive capabilities, there are many examples and applications of artificial intelligence in use today.

However, the technology is still in its infancy. What many companies call A.I. today isn't necessarily so. As a software engineer, I could claim that any piece of software with an algorithm that responds to pre-defined, multi-faceted input or user behavior has A.I. But that isn't really A.I.

A true artificially-intelligent system is one that can learn on its own. We're talking about neural networks from the likes of Google's DeepMind, which can make connections and reach meanings without relying on pre-defined behavioral algorithms. True A.I. can improve on past iterations, getting smarter and more aware, allowing it to enhance its capabilities and its knowledge.

That type of A.I., the kind we see in wonderful stories depicted on television through the likes of HBO's powerful and moving series Westworld, or Alex Garland's Ex Machina, is still a way off. We're not talking about that. At least not yet. Today, we're talking about the pseudo-A.I. technologies that are driving much of our voice and non-voice interactions with machines -- the machine-learning phase of the Digital Age.

While companies like Apple, Facebook and Tesla roll out ground-breaking updates and revolutionary changes to how we interact with machine-learning technology, many of us are still clueless about just how A.I. is being used today by businesses both big and small. How much of an effect will this technology have on our future lives, and in what other ways will it seep into day-to-day life? When A.I. really blossoms, how much of an improvement will it be over the current iterations of this so-called technology?

A.I. And Quantum Computing

The truth is that, whether or not true A.I. is out there or actually a threat to our existence, there's no stopping its evolution and its rise. Humans have always fixated on improving life across every spectrum, and technology has become the vehicle for doing just that. And although the past 100 years have seen more dramatic technological upheavals than all of prior human history, the next 100 years are set to pave the way for a multi-generational leap forward.

This will be at the hands of artificial intelligence. A.I. will also become smarter, faster, more fluid and more human-like thanks to the inevitable rise of quantum computing. Quantum computers will not only help solve many of life's most complex problems and mysteries regarding the environment, aging, disease, war, poverty, famine, the origins of the universe and deep-space exploration, to name a few; they'll soon power all of our A.I. systems, acting as the brains of these super-human machines.

However, quantum computers hold their own inherent risks. What happens after the first quantum computer goes online, making the rest of the world's computing obsolete? How will existing architecture be protected from the threat that these quantum computers pose? Considering that the world lacks any formidable quantum resistant cryptography (QRC), how will a country like the United States or Russia protect its assets from rogue nations or bad actors that are hellbent on using quantum computers to hack the world's most secretive and lucrative information?

Nigel Smart, founder of Dyadic Security, Vice President of the International Association for Cryptologic Research, Professor of Cryptology at the University of Bristol and an ERC Advanced Grant holder, told me in a conversation that quantum computers could still be about 5 years out. However, when the first quantum computer is built, Smart tells me that:

"...all of the world's digital security is essentially broken. The internet will not be secure, as we rely on algorithms which are broken by quantum computers to secure our connections to web sites, download emails and everything else. Even updates to phones, and downloading applications from App stores will be broken and unreliable. Banking transactions via chip-and-PIN could [also] be rendered insecure (depending on exactly how the system is implemented in each country)."

Clearly, there's no stopping a quantum computer led by a determined party without solid QRC. While all of this still seems a long way off, the future of this technology presents a Catch-22: able to solve the world's problems and likely to power all the A.I. systems on earth, but also incredibly dangerous in the wrong hands.

Applications of Artificial Intelligence In Use Today

Beyond our quantum-computing conundrum, today's so-called A.I. systems are merely advanced machine-learning software with extensive behavioral algorithms that adapt to our likes and dislikes. While extremely useful, these machines aren't getting smarter in the existential sense, but they are improving their skills and usefulness based on large datasets. These are some of the most popular examples of artificial intelligence in use today.

#1 -- Siri

Everyone is familiar with Apple's personal assistant, Siri. She's the friendly voice-activated computer that we interact with on a daily basis. She helps us find information, gives us directions, adds events to our calendars, and helps us send messages. Siri is a pseudo-intelligent digital personal assistant that uses machine-learning technology to get smarter and better at predicting and understanding our natural-language questions and requests.

#2 -- Alexa

Alexa's rise to become the smart home's hub has been somewhat meteoric. When Amazon first introduced Alexa, it took much of the world by storm. Its usefulness and its uncanny ability to decipher speech from anywhere in the room have made it a revolutionary product that can help us scour the web for information, shop, schedule appointments, set alarms and a million other things, while also powering our smart homes and serving as a conduit for those with limited mobility.

#3 -- Tesla

If you don't own a Tesla, you have no idea what you're missing. This is quite possibly one of the best cars ever made, not only because it has received so many accolades, but because of its predictive capabilities, self-driving features and sheer technological "coolness." Anyone who's into technology and cars needs to own a Tesla, and these vehicles are only getting smarter thanks to their over-the-air updates.

#4 -- Cogito

Co-founded by CEO Joshua Feast and Dr. Sandy Pentland, Cogito is quite possibly one of the most powerful examples on the market today of using behavioral adaptation to improve the emotional intelligence of customer support representatives. The company fuses machine learning and behavioral science to improve customer interaction for phone professionals, applied to the millions upon millions of voice calls occurring daily.

#5 -- Boxever

Boxever, co-founded by CEO Dave O’Flanagan, is a company that leans heavily on machine learning to improve the customer's experience in the travel industry and deliver 'micro-moments,' or experiences that delight customers along the way. It's through machine learning and the use of A.I. that the company has dominated the playing field, helping its customers find new ways to engage their clients in their travel journeys.

#6 -- John Paul

John Paul, a highly esteemed luxury travel concierge company helmed by its astute founder, David Amsellem, is another powerful example of A.I. in action, using predictive algorithms for existing-client interactions to understand and anticipate their desires and needs on an acute level. The company powers concierge services for millions of customers of the world's largest companies, such as VISA, Orange and Air France, and was recently acquired by Accor Hotels.

#7 -- Amazon.com

Amazon's transactional A.I. has been around for quite some time, allowing it to make astronomical amounts of money online. With its algorithms refined further with each passing year, the company has gotten acutely smart at predicting just what we're interested in purchasing based on our online behavior. While Amazon plans to ship products to us before we even know we need them, it hasn't quite gotten there yet. But it's most certainly on the horizon.

#8 -- Netflix

Netflix provides highly accurate predictive technology based on customers' reactions to films. It analyzes billions of records to suggest films you might like based on your previous reactions and choices. This tech is getting smarter every year as the dataset grows. Its one drawback is that most lesser-known films go unnoticed, while big-name films grow and balloon on the platform.

#9 -- Pandora

Pandora's A.I. is quite possibly one of the most revolutionary technologies out there today. The company calls it its musical DNA: each song is manually analyzed by a team of professional musicians against 400 musical characteristics, and the system has an incredible track record of recommending songs that would otherwise go unnoticed but that people inherently love.

#10 -- Nest

Most everyone is familiar with Nest, the learning thermostat acquired by Google in January 2014 for $3.2 billion. The Nest learning thermostat (which, by the way, can now be voice-controlled via Alexa) uses behavioral algorithms to learn your heating and cooling needs, anticipating and adjusting the temperature in your home or office accordingly. The Nest line now also includes a suite of other products, such as the Nest cameras.

Source: Forbes

 

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Why blockchain is not such a bad technology

Before we start, it is important to remember that blockchain and Bitcoin are not the same thing. Bitcoin technology combines several technologies: money transfer principles, cryptographic principles, blockchain proper, the concept of consensus, the proof-of-work principle, peer-to-peer networking, participant motivation, Merkle trees for organizing transactions, transparency principles, hashing, and more.

Therefore, on the one hand, blockchain problems arising from the form in which it is used by Bitcoin are not universal, and it can work differently for other currencies. On the other hand, right now the market is dominated by Bitcoin-like blockchains based on proof-of-work (POW).

Problem: Blockchain is slow and inefficient

Bitcoin’s throughput is seven transactions per second, not for each participant, but for the whole network. And for Ethereum, the second-best in terms of capitalization, it is 15 simple money transfers and 3–5 smart contracts per second.

The POW principle adopted by most currencies guarantees that electricity consumption and the amount of hardware will grow until mining becomes unprofitable. Yet this growth in overhead never improves the quality of the service provided — it’s always 7 transactions per second, no matter how many miners there are or how much electricity they burn.

The Lightning Network

Experts have long been concerned about the problem of insufficient transaction speed in the Bitcoin system, and to address it, they invented the Lightning Network.

This is how it works — or, how it will work, once it is launched: First, certain network participants who need a faster transaction rate set up a separate channel — consider it a kind of private chat room — and, as a guarantee of integrity, make a deposit in the main Bitcoin network. Then they start exchanging payments separately from the rest of the network — at any speed. When the channel is no longer needed, the participants record the results of the interaction in a public blockchain and, assuming no one violated the rules, receive their deposit back.
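A toy sketch of that channel lifecycle (lock deposits, update balances off-chain, settle once) might look like the following; it deliberately ignores the signatures, timeouts and penalty transactions that do the real security work in Lightning:

    # Two parties lock deposits, trade any number of instant off-chain
    # balance updates, and write only the final state back on-chain.
    class PaymentChannel:
        def __init__(self, deposit_a, deposit_b):
            self.balances = {"A": deposit_a, "B": deposit_b}  # on-chain deposits
            self.updates = 0                                  # off-chain payments

        def pay(self, sender, receiver, amount):
            if self.balances[sender] < amount:
                raise ValueError("insufficient channel balance")
            self.balances[sender] -= amount
            self.balances[receiver] += amount
            self.updates += 1            # no on-chain transaction needed

        def settle(self):
            """Close the channel: only this final state hits the blockchain."""
            return dict(self.balances)

    channel = PaymentChannel(deposit_a=500, deposit_b=500)
    for _ in range(100):                 # a hundred instant payments...
        channel.pay("A", "B", 1)
    print(channel.settle(), "after", channel.updates, "off-chain updates")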

Optimistic predictions have the Lightning Network launching as early as this year, enabling millions of transactions per second. So much for “slow.”

Problem: Blockchain is bulky

Blockchain is bulky, but that stops being a problem once some trust has been established on the network: you don’t have to download and check everything yourself to be confident that the likelihood of deception is very low.

Web wallets

First of all, existing Web wallets and Web services store everything and do all of the work for you. If no one complains about a certain service, it can very well be considered reliable and somewhat trusted.

It also comes with an important advantage over traditional payment systems. If one Web wallet closes, you can simply switch to another, because they share the same transaction record: there is only one blockchain. Compare that with what would happen if your regular bank had a glitch or went bankrupt and you needed to switch banks.

Thin wallets

Satoshi himself described another, more advanced (and more reliable) method back in 2008. Instead of storing and processing the entire 100GB blockchain, you can download and check just the block headers, along with proofs for the transactions that directly concern you.

If many random network nodes that you are talking to report the block headers are exactly the same, you may rather confidently say that everything is correct.

At the moment, the headers of all existing blocks take up only about 40MB (each header is just 80 bytes, so roughly half a million blocks come to about 40MB), which isn’t much. But you can save even more: you don’t have to store the headers of every block ever mined; you could start from a specific moment.
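A compact sketch of that header-plus-proof check is below. The double-SHA-256 hashing and odd-node duplication follow Bitcoin’s Merkle tree convention, but the four “transactions” and the proof are invented for the example:

    # Verify that a transaction is inside a block using only the block
    # header's Merkle root and a short proof path, no full chain required.
    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def merkle_root(leaves):
        level = [h(leaf) for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2:               # duplicate an odd last node
                level.append(level[-1])
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    def verify(leaf, proof, root):
        """proof = list of (sibling_hash, sibling_is_left) pairs, bottom up."""
        node = h(leaf)
        for sibling, sibling_is_left in proof:
            node = h(sibling + node) if sibling_is_left else h(node + sibling)
        return node == root

    txs = [b"tx-a", b"tx-b", b"tx-c", b"tx-d"]
    root = merkle_root(txs)                  # this value lives in the header
    proof = [(h(b"tx-d"), False), (h(h(b"tx-a") + h(b"tx-b")), True)]
    print(verify(b"tx-c", proof, root))      # True: tx-c is in the block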

Problem: Blockchain is not scalable

A system’s scalability refers to its ability to improve with the addition of resources. The classic blockchain is indeed completely unscalable; adding resources does not affect the speed of transactions at all.

It’s interesting that the classic blockchain is scalable neither up nor down: if you built a small system for solving local problems on the same principles, it would be vulnerable to a so-called 51% attack — anyone with enough computing power could come in, immediately take over, and rewrite history.

Plasma

Joseph Poon (a creator of the Lightning Network) and Vitalik Buterin (a cofounder of Ethereum) recently proposed a new solution. They call it Plasma.

Plasma is a framework for making a blockchain of blockchains. The concept is similar to that of the Lightning Network, but it was developed for Ethereum. Here is how it works: Someone makes a deposit in the main Ethereum network and starts talking to other clients independently and separately, supervising the execution of his or her smart contract and the general rules of Ethereum on their own. A smart contract is a mini-program for working with money and Web wallets. It is the key feature of Ethereum.

From time to time, the results of these individual communications are recorded in the main network. Also, as with the Lightning Network, all participants oversee the execution of the smart contract and complain if something is not right.

So far, the proposal is just a draft, but if the concept is successfully implemented, the problem of blockchain scalability will be a thing of the past.

Problem: Miners are burning up the planet’s resources

Proof-of-work is the most popular method of reaching consensus in cryptocurrencies. A new block is created after lengthy calculations performed solely to prevent rewriting of the financial history. POW network miners burn a lot of electricity, and the number of megawatts wasted is regulated not by safety concerns or common sense, but by economics: capacity expands as long as the current cryptocurrency exchange rate keeps mining profitable.

Proof-of-stake

An alternative approach to distributing the right to create blocks is called proof-of-stake (POS). Under this concept, the likelihood of creating a block, and thus the right to receive a reward (in the form of interest or newly emitted currency), depends not on how much computational work you have done (how much electricity you have burnt), but on how much currency you hold in the system.

If you own a third of all coinage, you have a one-third probability of creating a new block, thanks to a random algorithm. This principle is a good reason for participants to obey the rules, because the more of the currency you have, the more interested you are in a properly functioning network and a stable currency rate.
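A toy simulation of that stake-weighted lottery, with invented balances and none of the extra machinery (randomness beacons, slashing, and so on) that real POS systems add, looks like this:

    # The chance of proposing the next block is proportional to coins held,
    # not to computing power burned.
    import random
    from collections import Counter

    stakes = {"alice": 60, "bob": 30, "carol": 10}   # balances (shares of 100)
    rng = random.Random(42)

    def pick_proposer(stakes, rng):
        holders, weights = zip(*stakes.items())
        return rng.choices(holders, weights=weights, k=1)[0]

    tally = Counter(pick_proposer(stakes, rng) for _ in range(10_000))
    for name, wins in tally.most_common():
        print(f"{name}: proposed {wins / 100:.1f}% of blocks "
              f"(stake share {stakes[name]}%)")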

Proof-of-authority

A more radical method exists as well: letting only trusted participants create blocks. For example, 10 hospitals can use a blockchain to keep track of an epidemiological situation in a city. Each hospital has its own signature key as proof of authority. That makes such a blockchain private: Only hospitals can write to it. At the same time, it helps maintain openness, an important quality of the blockchain.

However, proof-of-authority is detrimental to the original blockchain concept: The network effectively becomes centralized.

Resources can be used for good

Some networks do useful work within the proof-of-work concept. They look for prime numbers of a certain type (Primecoin), calculate protein structures (FoldingCoin), or perform other scientific tasks that require a lot of calculations (GridCoin). The reward for “mining” promotes investing more resources in science.

Problem: Blockchain is decentralized and therefore is not developing

It is not very easy to introduce changes into a decentralized network protocol. The developer can either run mandatory updates for all clients — although that kind of network cannot be considered truly decentralized — or persuade all participants to accept the changes. If a significant proportion of them vote against the changes, the community may split: The blockchain will split into two alternative blockchains, and there will be two currencies. That split is called a fork.

Part of the problem is that different participants have different interests. Miners are interested in growing rewards and interest; users want to pay less for transfers; fans want the cryptocurrency to become more popular; and geeks want useful innovations to be added to the technologies.

Two of the largest cryptocurrencies have already split. It happened to Bitcoin not long ago, when participants were unable to agree on a strategy for expanding the block size. A little earlier, something similar happened to Ethereum, the result of a disagreement over whether it was fair to roll back the hack of an investment fund and return the money to investors.

How can such situations be avoided?

Tezos

It is possible to encode into a cryptocurrency the ability to vote on modifications. That’s precisely what the cryptocurrency Tezos, which is about to go on the market, did. Its primary voting characteristics are as follows (a toy sketch of the first two points appears after the list):

  1. The more cryptocurrency you hold, the more voting power you have. Mining power is irrelevant.
  2. A vote may be delegated to someone who understands the subject of the current vote better than you do.
  3. Developers are entitled to a veto for one year after launch, and if necessary veto power can be extended.
  4. The initial quorum will be 80%, but that can be changed to conform to actual user activity.
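Here is the promised toy tally of the first two points: voting power follows holdings, and a holder may delegate that power. All names, balances and ballots are invented:

    # Stake-weighted voting with delegation, in miniature.
    holdings = {"alice": 50, "bob": 30, "carol": 20}
    delegations = {"carol": "alice"}          # carol trusts alice on this vote
    ballots = {"alice": "yes", "bob": "no"}   # carol casts no ballot herself

    def tally_votes(holdings, delegations, ballots):
        votes = {}
        for holder, stake in holdings.items():
            representative = delegations.get(holder, holder)
            choice = ballots.get(representative)
            if choice is not None:            # abstentions carry no weight
                votes[choice] = votes.get(choice, 0) + stake
        return votes

    result = tally_votes(holdings, delegations, ballots)
    turnout = sum(result.values()) / sum(holdings.values())
    print(result, f"turnout {turnout:.0%}")   # {'yes': 70, 'no': 30} turnout 100%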

It’s thought this approach will significantly lower the emotional temperature of disputes and reduce the need for hard forks.

When voting under these principles, at some point the majority could well eliminate the minority’s voting rights. In short, the rich may take over. However, Tezos’s developers think such a takeover would hurt the value of the currency and is therefore unlikely. We’ll see.

Problem: Blockchain is too transparent

Imagine you’re WikiLeaks and you get donations in bitcoins. Everyone knows your address and how much you have, and when you try to convert your money into dollars at an exchange, law enforcement will know how much you have in dollars.

You can’t launder your money in Bitcoin. Dividing up the money into 10 wallets only means having 10 accounts associated with you. There are services called mixers or tumblers that move around large sums of money for a fee, to obscure the real owner, but they are inconvenient for a number of reasons.

CoinJoin in Dash

The creators of the cryptocurrency Dash (formerly Darkcoin) were the first to try solving the anonymity problem, using the PrivateSend function. Their approach was simple: they built a tumbler right into the currency.

There were a few problems. First, if someone (e.g., law enforcement) controls a significant number of the nodes that mix “clean” money with “dirty,” they can observe the transfer. Perhaps an unlikely scenario, but still quite possible.

Second, mixing dirty money with clean makes all of that money look a bit dirty — or “gray.” But for gray money to appear clean, all participants have to use mixing all the time.
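
The mechanics behind the mixing idea can be shown in a few lines. The toy CoinJoin below is illustrative only and not Dash's actual PrivateSend protocol (which adds masternodes, fees and multiple mixing rounds); it just shows why equal-sized, shuffled inputs and outputs make the input-to-output link ambiguous:

    import random

    def coinjoin(participants: dict, denomination: float = 1.0) -> dict:
        """Toy CoinJoin: every participant contributes one equal-sized input
        and names a fresh output address; shuffling both sides leaves an
        observer unable to tell which output belongs to which input."""
        inputs = [(src, denomination) for src in participants]
        outputs = [(dst, denomination) for dst in participants.values()]
        random.shuffle(inputs)
        random.shuffle(outputs)
        return {"inputs": inputs, "outputs": outputs}

    tx = coinjoin({
        "addr_alice_old": "addr_alice_new",
        "addr_bob_old": "addr_bob_new",
        "addr_carol_old": "addr_carol_new",
    })
    print(tx)   # three equal inputs, three equal outputs, links obscured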

CryptoNote in Monero

A more reliable approach was invented: a truly anonymous currency called Monero.

First, Monero uses ring signatures: electronic signatures that permit a participant to sign a message on behalf of a group while preventing anyone from ascertaining which member actually signed it. This ability permits the sender to hide their own traces. At the same time, the protocol prevents double spending.

Second, Monero uses not only a private key for transferring money, but also an additional private key for viewing what has arrived in a wallet, making it impossible for outsiders to see someone else’s transaction history.

Third, senders may want to generate one-time wallets to keep private funds separate from money coming in from the markets. (A similar practice has long been recommended for Bitcoin.)
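
As a toy illustration of that third point, the sketch below derives a fresh receiving address per payment from a wallet seed. This is emphatically not Monero's real cryptography, which uses elliptic-curve stealth addresses; a bare hash is used here only to show why per-payment addresses look unlinkable to an outside observer:

    import hashlib, os

    def one_time_address(wallet_seed: bytes, payment_index: int) -> str:
        """Derive a fresh receiving address per payment so incoming transfers
        cannot be grouped on-chain. Toy derivation only: Monero's real scheme
        uses elliptic-curve keys, not a bare hash."""
        material = wallet_seed + payment_index.to_bytes(8, "big")
        return hashlib.sha256(material).hexdigest()

    seed = os.urandom(32)                # the wallet's private seed
    addresses = [one_time_address(seed, i) for i in range(3)]
    print(addresses)                     # three unlinkable-looking addresses, one owner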

Conclusion

Our short overview of issues that some talented people have turned to their benefit has come to a close. We could’ve written much more: about smart contracts on Ethereum, the bright future of Ripple, or cryptocurrencies without a blockchain, such as IOTA.

Strictly speaking, the title of this article is inaccurate. We discussed blockchain’s add-ons, not blockchain itself. But that’s the beauty of blockchain: It inspires people to look for ways to improve it.

 

Source: Kaspersky

 

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

It's Our Birthday!

12 years ago, on August 1st 2006, Hanson Regan was born. Founded by brothers John and Gabriel Kelly, the company celebrates its birthday today.

(John & Gabriel 2006)

Over the years Hanson Regan has grown steadily and organically, establishing an exceptional reputation in the marketplace.

"The twelve years have flown by, it became a joint vision really, Gabriel had his and I mine, coming to the same point, in some ways influenced by forces outside us. We didn’t know it but we had to wait until the right conditions presented themselves." John Kelly remarked. 

From the very beginning John and Gabriel set out our four core values of Grow, Honest, Professional and Relationships, which are as relevant to us today as they were 12 years ago.

Our people are at the heart of Hanson Regan. We are powerful listeners, and by combining our clients’ vision with our passion for rapid resourcing we are able to make what our clients do really successful. Dedicated account management, full support suites, candidate referencing and rapid, precise resourcing are just a few of the ways in which our team at Hanson Regan provides excellence.

"We’d always hoped to be able to help large, multinational clients source perfect contractors enabling them to run their businesses effectively. Now we are, thanks to the great people we have at Hanson Regan, and the processes we adopt to provide the best candidates. Like our clients we support we are continuously improving, growing stronger and building some super honest partnerships: Here’s to the next 12." - John Kelly

(Hanson Regan Team 2018)

Today we are celebrating all the people in our company who make us great! So thank you to each and every one of you: keep up the fantastic work and enjoy the celebratory breakfast. You’ve all earned it!

If you're looking to make a career change, or to wish us a Happy Birthday, you can call us on +44 0208 290 4656 or drop us an email info@hansonregan.com

 

Most of AI’s Business Uses Will Be in Two Areas

While overall adoption of artificial intelligence remains low among businesses (about 20% as of our last study), senior executives know that AI isn’t just hype. Organizations across sectors are looking closely at the technology to see what it can do for their business. As they should: we estimate that 40% of all the potential value that can be created by analytics today comes from the AI techniques that fall under the umbrella of “deep learning,” which utilizes multiple layers of artificial neural networks, so called because their structure and function are loosely inspired by those of the human brain. In total, we estimate deep learning could account for between $3.5 trillion and $5.8 trillion in annual value.

However, many business leaders are still not exactly sure where they should apply AI to reap the biggest rewards. After all, embedding AI across the business requires significant investment in talent and upgrades to the tech stack as well as sweeping change initiatives to ensure AI drives meaningful value, whether it be through powering better decision-making or enhancing consumer-facing applications.

Through an in-depth examination of more than 400 actual AI use cases across 19 industries and nine business functions, we’ve discovered that an old adage proves most useful in answering the question of where to put AI to work: “Follow the money.”

The business areas that traditionally provide the most value to companies tend to be the areas where AI can have the biggest impact. In retail organizations, for example, marketing and sales has often provided significant value. Our research shows that using AI on customer data to personalize promotions can lead to a 1-2% increase in incremental sales for brick-and-mortar retailers alone. In advanced manufacturing, by contrast, operations often drive the most value. Here, AI can enable forecasting based on underlying causal drivers of demand rather than prior outcomes, improving forecasting accuracy by 10-20%. This translates into a potential 5% reduction in inventory costs and revenue increases of 2-3%.
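
The forecasting contrast is easy to demonstrate on synthetic data. The sketch below (assuming NumPy and scikit-learn are available; the demand formula and coefficients are invented) compares a naive previous-period forecast with a regression on causal drivers such as price, promotion and weather:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 200
    price = rng.uniform(1, 5, n)
    promo = rng.integers(0, 2, n)
    temp = rng.uniform(10, 30, n)
    # Synthetic demand driven by causal factors plus noise (coefficients invented)
    demand = 100 - 12 * price + 25 * promo + 1.5 * temp + rng.normal(0, 5, n)

    # Naive baseline: forecast each period from the previous period's outcome
    naive_mae = np.abs(demand[1:] - demand[:-1]).mean()

    # Causal-driver model: regress demand on price, promotion and weather
    X = np.column_stack([price, promo, temp])
    model = LinearRegression().fit(X[:150], demand[:150])
    causal_mae = np.abs(demand[150:] - model.predict(X[150:])).mean()

    print(f"naive MAE:  {naive_mae:.1f}")    # large: lags ignore the drivers
    print(f"causal MAE: {causal_mae:.1f}")   # far smaller on this synthetic data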

While applications of AI cover a full range of functional areas, it is in fact in these two cross-cutting ones—supply-chain management/manufacturing and marketing and sales—where we believe AI can have the biggest impact, at least for now, in several industries. Combined, we estimate that these use cases make up more than two-thirds of the entire AI opportunity. AI can create $1.4-$2.6 trillion of value in marketing and sales across the world’s businesses and $1.2-$2 trillion in supply chain management and manufacturing (some of the value accrues to companies while some is captured by customers). In manufacturing, the greatest value from AI can be created by using it for predictive maintenance (about $0.5-$0.7 trillion across the world’s businesses). AI’s ability to process massive amounts of data including audio and video means it can quickly identify anomalies to prevent breakdowns, whether that be an odd sound in an aircraft engine or a malfunction on an assembly line detected by a sensor.

Another way business leaders can home in on where to apply AI is to simply look at the functions that are already taking advantage of traditional analytics techniques. We found that the greatest potential for AI to create value is in use cases where neural network techniques could either provide higher performance than established analytical techniques or generate additional insights and applications. This is true for 69% of the AI use cases identified in our study. In only 16% of use cases did we find a “greenfield” AI solution that was applicable where other analytics methods would not be effective. (While the number of use cases for deep learning will likely increase rapidly as algorithms become more versatile and the type and volume of data needed to make them viable become more available, the percentage of greenfield deep learning use cases might not increase significantly because more established machine learning techniques also have room to become better and more ubiquitous.)

We don’t want to come across as naïve cheerleaders. Even as we see economic potential in the use of AI techniques, we recognize the tangible obstacles and limitations to implementing AI. Obtaining data sets that are sufficiently large and comprehensive to feed the voracious appetite that deep learning has for training data is a major challenge. So, too, is addressing the mounting concerns around the use of such data, including security, privacy, and the potential for passing human biases on to AI algorithms. In some sectors, such as health care and insurance, companies must also find ways to make the results explainable to regulators in human terms: Why did the machine come up with this answer? The good news is that the technologies themselves are advancing and starting to address some of these limitations.

Beyond these limitations, there are the arguably more difficult organizational challenges companies face as they adopt AI. Mastering the technology requires new levels of expertise, and process can become a major impediment to successful adoption. Companies will have to develop robust data maintenance and governance processes, and focus on both the “first mile” (how to acquire data and organize data efforts) and the far more difficult “last mile”: how to integrate the output of AI models into workflows, ranging from those of clinical trial managers and sales force managers to procurement officers.

While businesses must remain vigilant and responsible as they deploy AI, the scale and beneficial impact of the technology on businesses, consumers, and society make pursuing AI opportunities worth a thorough investigation. The pursuit isn’t simple, but it can be initiated by invoking a simple concept: follow the money.

 

Source: HBR

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

What’s next for Tech innovation in 2018?

One way that Samsung Electronics works with the technology startup community is through Samsung NEXT – an innovation arm that scouts, supports and invests in forward-thinking new software and services businesses and entrepreneurs. By rubbing shoulders with those on the frontline of software innovation, as well as harnessing the insights of its homegrown experts, Samsung is always thinking about how technology, and indeed society, will change. We spoke with members of the Samsung NEXT team—here are the top five technologies that will change people’s lifestyles in 2018.

1. Faster, more transparent machine learning

Artificial intelligence (AI) will dramatically expand within the next 12 months. It is already changing the way people interact with a number of applications, platforms and services across both consumer and enterprise environments.

 

In the next couple of years, there will be new approaches on two fronts. Firstly, less data will be required to train an algorithm. This means an image recognition system that currently needs 100,000 images to learn how to operate will only need a small fraction of that number. This will make it easier to quickly implement powerful machine learning systems.
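
One established way to need far less data is transfer learning: reuse a network pretrained on a huge dataset and retrain only a small head on your own images. The sketch below (assuming a recent PyTorch and torchvision are installed; the two-class task and dummy batch are placeholders) shows the pattern:

    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from a network pretrained on ImageNet (~1.2M images)...
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for param in model.parameters():
        param.requires_grad = False        # freeze the learned visual features

    # ...and retrain only a small head for the new task, so a few hundred
    # labelled images can stand in for the usual 100,000.
    model.fc = nn.Linear(model.fc.in_features, 2)    # e.g. defect / no defect
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # One illustrative training step on a dummy batch:
    images = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, 2, (8,))
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(float(loss))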

 

Secondly, the technology will become more transparent. Advances in the field will mean researchers are able to open the black box of AI and more clearly explain why a particular model made the decision it did. Currently, many academics and start-ups are putting effort into understanding how a machine makes decisions, how models learn from the data, and which parameters of the data influence the models.
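
One simple, widely used way to peek into the black box is permutation importance: shuffle one input feature and measure how much the model's accuracy drops. A minimal sketch, assuming scikit-learn and synthetic data in which only the first feature really matters:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 3))                    # features f0, f1, f2
    y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)    # f0 matters most, f2 not at all

    model = RandomForestClassifier(random_state=0).fit(X[:400], y[:400])
    result = permutation_importance(model, X[400:], y[400:],
                                    n_repeats=20, random_state=0)
    for name, importance in zip(["f0", "f1", "f2"], result.importances_mean):
        print(f"{name}: {importance:.3f}")           # f0 >> f1 > f2 (near zero)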

Scott Phoenix, CEO of Vicarious, presents on human-level intelligent robots at the Samsung CEO Summit last October in San Francisco. (source: www.vicarious.com)

 

Samsung plans to build an AI platform under a common architecture that will provide the deepest understanding of usage context and behaviors. This is one of the core strategies for creating a user-centric AI ecosystem. Samsung NEXT has also invested in various companies innovating in the field, including Vicarious, a company developing neuroscience-based artificial general intelligence (AGI) for robots with simpler deployment and faster training; Bonsai, which develops an AI platform that empowers enterprises to create, deploy and manage AI models; and FloydHub, a start-up that has developed a cloud service for machine learning.

2. New AR and VR form factors and viewing models

Both augmented reality (AR) and virtual reality (VR) are increasingly being relied upon to create more immersive worlds where technology enables users to get more hands-on with virtual overlays and environments. In the case of AR, devices won’t remove us from our world, but will rather enable us to have objects appear as if they were really there.

 

2018 will see more developers embracing AR and starting to make interesting applications that move beyond the world of gaming. One such example is a furniture company planning to make its full catalogue available in AR. Samsung NEXT has invested in companies like 8i, which provides a platform that enables true 3D (fully volumetric) video capture of people, allowing viewers to walk around them as if they were real humans in VR and AR.

 

8i’s Holo augmented reality application enables digital recreation of people and characters to be seen in the real world through a smartphone camera. (source: www.8i.com)

 

Head Mounted Displays (HMDs) will see foundational technology improvements in the quality of their displays, sensors, and materials. In 2018, there will be a lot of excitement in the industry in the form of M&A and investment activities. “For VR, we will see more standalone devices, falling between existing HMDs powered by mobile phones, and high-end hardware connected to powerful PCs. This will enable more people to experience the technology in new ways,” said Ajay Singh, Samsung NEXT Ventures.

3. Blockchain to look beyond cryptocurrencies

“In 2017, we saw blockchain technology increasingly applied to support unbanked countries and communities,” said Raymond Liao of Samsung NEXT Ventures. “With underpinnings in peer-to-peer transactions, blockchain has the power to democratize transactions by removing the middleman and reducing the needless fees that so frequently hamstring those deprived of banking services.”

 

Cryptocurrency has been the dominant killer application for blockchain up to now. However, we will see blockchain entrepreneurs and decentralization idealists, freshly financed by token sales, marching to either empower consumers against the one-sided data monetization paradigm or break up enterprise data silos in, for example, the supply chain and healthcare industries.

Samsung’s focus on security will be an advantage for the company as far as blockchain is concerned. The elephant in the room around blockchain is that the entire technology is only as secure as the users’ keys. Samsung’s technology enables enterprise customers to be assured of a certain level of security in how their employees interact with their blockchain-based apps. Furthermore, Samsung NEXT’s portfolio includes companies like HYPR, which provides enterprises with enhanced security and user experience using blockchain, and Filament, which secures Internet of Things (IoT) devices with its blockchain protocol.

4. IoT to put power in the hands of healthcare patients

Healthcare is an industry that is ripe for disruption. We will begin to see the power of IoT in healthcare with the emergence of inexpensive, continuous ways to capture and share our data, as well as derive insights that inform and empower patients. Moreover, wearable adoption will create a massive stream of real-time health data beyond the doctor’s office, which will significantly improve diagnosis, compliance and treatment. In short, a person’s trip to the doctor will start to look different – but for the right reasons.

 

Samsung is using IoT and AI to improve efficiency in healthcare. Samsung NEXT has invested in startups in this area, such as Glooko, which helps people with diabetes by uploading patients’ glucose data to the cloud to make it easier to access and analyse. Another noteworthy investment in this space from Samsung NEXT is HealthifyMe, an Indian company whose mobile app connects AI-enabled human coaches with people seeking diet and exercise advice.

Samsung is uniquely positioned among tech companies in that it already has a significant business in healthcare. The company has solutions in wearables, hospital screens and tablets, and X-ray and MRI machines. By tying all these solutions together and cooperating with other partners, it will enable patients to manage their health from their own devices.

5. IoT breaks free from homes and enters the city

In the next couple of years, one should expect to see IoT transform urban environments thanks to the combination of learnings from smart homes and buildings, and the proliferation of 5G. Transformation will happen in waves, starting with innovation that requires fewer regulations. It is expected to impact the daily life of the community in meaningful ways, such as parking solutions, mapping, and bike share schemes.

 

Samsung NEXT already has various IoT investments, including Stae for data-driven urban planning and Swiftly, which provides enterprise software to help transit agencies and cities improve urban mobility.

 

The company has its own IoT platform SmartThings—an acquisition that came through the Samsung NEXT team. The platform is connected to ARTIK for enterprises and HARMAN Ignite’s connected car platform, creating a comprehensive IoT ecosystem. Based on its progress on IoT, Samsung showcased its vision for ‘Samsung City 2020’ at this year’s CES, which is on its way to realization.

Source: Samsung

If you’re interested in a career change call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

The Impact on Jobs and Training from AI, AR and VR

Artificial intelligence, augmented reality and virtual reality are here to stay, but what impact will they have on jobs and training?

A new study by Pew Research Center and Elon University’s Imagining the Internet Center asked more than 1,400 technologists, futurists and scholars whether well-prepared workers will be able to keep up in the race with artificial intelligence tools, and what impact this development will have on market capitalism.

According to Elon University, most of the experts said they hope to see education and jobs-training ecosystems shift in the next decade to exploit liberal arts-based critical-thinking-driven curriculums; online courses and training amped up by artificial intelligence, augmented reality and virtual reality; and scaled-up apprenticeships and job mentoring.

However, some expressed fears that education will not meet new challenges or — even if it does — businesses will implement algorithm-driven solutions to replace people in many millions of jobs, leading to a widening of economic divides and capitalism undermining itself.

An analysis of the overall responses uncovered five key themes:

  1. The training ecosystem will evolve, with a mix of innovation in all education formats. For instance, more learning systems will migrate online and workers will be expected to learn continuously. Online courses will get a big boost from advances in augmented reality, virtual reality and artificial intelligence.

  2. Learners must cultivate 21st century skills, capabilities and attributes such as adaptability and critical thinking.

  3. New credentialing systems will arise as self-directed learning expands.

  4. Training and learning systems will not be up to the task of adapting to train or retrain people for the skills that will be most prized in the future.

  5. Technological forces will fundamentally change work and the economic landscape, with millions more people and millions fewer jobs in the future, raising questions about the future of capitalism.

“The vast majority of these experts wrestled with a foundational question: What is special about human beings that cannot be overtaken by robots and artificial intelligence?” said Lee Rainie, director of internet, science and technology research at Pew Research Center and co-author of the report. “They were focused on things like creativity, social and emotional intelligence, critical thinking, teamwork and the special attributes tied to leadership. Many made the case that the best educational programmes of the future will teach people how to be lifelong learners, on the assumption that no job requirements today are fixed and stable.”

Among the skills, capabilities and attributes the experts predicted will be of most future value were: adaptability, resilience, empathy, compassion, judgement and discernment, deliberation, conflict resolution, and the capacity to motivate, mobilise and innovate.

Jeff Jarvis, a professor at the City University of New York Graduate School of Journalism, highlighted the need for schools to take a new approach to educate the workforce of the future: “Schools today turn out widget makers who can make widgets all the same. They are built on producing single right answers rather than creative solutions. They are built on an outmoded attention economy: Pay us for 45 hours of your attention and we will certify your knowledge. I believe that many — not all — areas of instruction should shift to competency-based education in which the outcomes needed are made clear and students are given multiple paths to achieve those outcomes, and they are certified not based on tests and grades but instead on portfolios of their work demonstrating their knowledge.”

Tiffany Shlain, filmmaker and founder of the Webby Awards, added: “The skills needed to succeed in today’s world and the future are curiosity, creativity, taking initiative, multi-disciplinary thinking and empathy. These skills, interestingly, are the skills specific to human beings that machines and robots cannot do, and you can be taught to strengthen these skills through education.”


Source: Smartcities

 

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Built Robotics Self Driving Bulldozer

This is a self-driving bulldozer created by Built Robotics. It can dig holes by itself based on the location coordinates you send from the app.

Built Robotics was founded with a question — What will construction look like in a generation? And what solutions can we develop to address a chronic labor shortage, productivity that has fallen by half since the 1960s, and an industry that, despite significant improvements, remains the most dangerous in America? These are tough questions, and it’s impossible to know the answers today. But we kept coming back to one realization: we need a new way to build.

"With that mission in mind, we came up with a simple idea. Let’s take the latest sensors from self-driving cars, retrofit them into proven equipment from the job site, and develop a suite of autonomous software designed specifically for the requirements of construction and earthmoving. And over the last two years, with a team of talented engineers, roboticists, and construction experts, that’s what we’ve done. It hasn’t been easy—in fact, no one has ever done what we’re doing—but with over $100 billion in earthmoving and grading services performed in the US each year, it feels like we’re onto something."

Source: BuiltRobotics

If you’re interested in a career in Artificial Intelligence or Robotics call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

In an AI-powered world, what are potential jobs of the future?

With virtual assistants answering our emails and robots replacing humans on manufacturing assembly lines, mass unemployment due to widespread automation seems imminent. But it is easy to forget amid our growing unease that these systems are not “all-knowing” and fully competent.

As many of us have observed in our interactions with artificial intelligence, these systems perform repetitive, narrowly defined tasks very well but are quickly stymied when asked to go off script — often to great comical effect. As technological advances eliminate historic roles, previously unimaginable jobs will arise in the new economic reality. We combine these two ideas to map out potential new jobs that may arise in the highly automated economy of 2030.

Training, supervising and assisting robots

As robots take on increasingly complex functions, more humans will be needed to teach robots how to correctly accomplish these jobs. Human Intelligence Task (HIT) marketplaces like MTurk and Crowdflower already use humans to train AI to recognize objects in images or videos. New AI companies, like Lola, a personal travel service, are expanding HIT with specialized workers to train AI for complex tasks. 

Microsoft’s Tay bot, which quickly devolved into tweeting offensive and obscene comments after interacting with users on the internet, caused significant embarrassment to its creators. Given how quickly Tay went off the rails, it is easy to imagine how dangerous a bot trusted with maintaining our physical safety could become if it is fed the wrong sets of information or learns the wrong things from a poorly designed training set. Because the real world is ever-changing, AI must continuously train and improve even after it achieves workable domain expertise, which means that expert human supervision remains critical.

Integrating jobs for people into the design of semi-autonomous systems has enabled some companies to achieve greater performance despite current technological limitations.

BestMile, a driverless vehicle deployed to transport luggage at airports, has successfully integrated human supervision into its design. Instead of engineering for every edge case in the complex and dangerous environment of an airport tarmac, the BestMile vehicle stops when it senses an obstacle in its path and waits for its human controller to decide what to do. This has enabled the company to enter the market much more quickly than competitors, which must refine their sensing algorithms before their robots can operate independently without incident.
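
The escalation pattern described here is simple to express in code. The sketch below is a toy event loop, not BestMile's actual software; the event names and operator answers are invented to show the division of labour between autonomy and human judgement:

    def control_loop(sensor_events, ask_human):
        """Drive autonomously while the path is clear; on any obstacle,
        stop and defer to a remote human operator."""
        for event in sensor_events:
            if event == "path_clear":
                print("autonomous: proceeding")
            else:
                print(f"obstacle '{event}': stopping, escalating to operator")
                decision = ask_human(event)          # blocks until a human decides
                print(f"operator says: {decision}")

    def operator_console(event):
        # Stand-in for a real operator UI; answers are hard-coded here.
        return {"luggage_cart": "wait, then re-route",
                "person": "hold until clear"}.get(event, "hold")

    control_loop(["path_clear", "luggage_cart", "path_clear", "person"],
                 operator_console)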

Frontier explorers: Outward and upward

When Mars One, a Dutch startup whose goal is to send people to Mars, called for four volunteers to man their first Mars mission, more than 200,000 people applied.

Regardless of whether automation leads to increased poverty, automation’s threat of displacing people from their current jobs, and in essence some part of their sense of self-worth, could drive many to turn to an exploration of our final frontiers. An old saying jokes that there are more astronauts from Ohio than any other state because there is something about the state that makes people want to leave this planet.

One risk to human involvement in exploration is that exploration itself is already being automated. Relatively few of our recent space exploration missions have been manned. Humans have never left Earth’s orbit; all our exploration of other planets and the outer solar system has been through unmanned probes.

Artificial personality designers

As AI creeps into our world, we’ll start building more intimate relationships with it, and the technology will need to get to know us better, but some AI personalities may not suit some people. Moreover, different brands may want to be represented by distinct and well-defined personalities. The effective human-facing AI designer will, therefore, need to be mindful of subtle differences within AI to make AI interactions enjoyable and productive. This is where the Personality Designer or Personality Scientist comes in.

While Siri can tell a joke or two, humans crave more, so we will have to train our devices to provide for our emotional needs. In order to create a stellar user experience, AI personality designers or scientists are essential — to research and to build meaningful frameworks with which to design AI personalities. These people will be responsible for studying and preserving brand and culture, then injecting that information meaningfully into the things we love, like our cars, media, and electronics.

Chatbot builders are also hiring writers to write lines of dialogue and scripts to inject personality into their bots. Cortana, Microsoft’s chatbot, employs an editorial team of 22. Creative agencies specializing in writing these scripts have also found success in the last year.

Startups like Affectiva and Beyond Verbal are building technology that assists in recognizing and analyzing emotions, enabling AI to react and adjust its interactions with us to make the experience more enjoyable or efficient. A team from the Massachusetts Institute of Technology and Boston University is teaching robots to read human brain signals to determine when they have made an error, without active human correction and monitoring. Google has also recently filed patents for robot personalities and has designed a system to store and distribute personalities to robots.

Human-as-a-Service

As automated systems become better at doing most jobs humans perform today, the jobs that remain monopolized by humans will be defined by one important characteristic: the fact that a human is doing them. Of these jobs, social interaction is one area where humans may continue to desire specifically the intangible, instinctive difference that only interactions and friendships with other real humans provide.

We are already seeing profound shifts toward “human-centric” jobs in markets that have experienced significant automation. A recent Deloitte analysis of the British workforce over the last two decades found massive growth in “caring” jobs: the number of nursing assistants increased by 909% and care workers by 168%.

The positive health effects of touch have been well documented and may provide valuable psychological boosts to users, patients, or clients. In San Francisco, companies are even offering professional cuddling services. Whereas today such services are stigmatized, “affection as a service” may one day be viewed on par with cognitive behavioral therapy or other treatments for mental health.

Likewise, friendship is a role that automated systems will not be able to fully fill. Certain activities that are generally combined with some level of social interaction, like eating a meal, are already seeing a trend towards “paid friends.” Thousands of Internet viewers are already paying to watch mukbang, or live video streams of people eating meals, a practice which originated in Korea as a remedy for the loneliness of living alone. In the future, it is possible to imagine people whose entire job is to eat meals and engage in polite conversation with clients.

More practical social jobs in an automated economy may include professional networkers. Just as people have not trusted online services fully, it is likely that people will not trust more advanced matching algorithms and may defer to professional human networkers who can properly arrange introductions to the right people to help us reach our goals. Despite the proliferation of startup investing platforms, for example, we continue to see startups and VC firms engage placement agents in order to successfully fundraise.

Despite many claims to the contrary, designing a fully autonomous system is incredibly complex and remains far out of reach. For now, training a human is still much cheaper than developing a robot replacement.

 

Source: Readwrite

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Blue Prism World London 2018

At Hanson Regan we champion innovation, and so we're always on the lookout for exciting new ways to maximise efficiency and output. We were therefore delighted to attend this year's most impactful Robotic Process Automation (RPA) event: Blue Prism World London 2018.

 

It's no secret that RPA is big news for companies looking to automate time-consuming processes. For those unfamiliar with the term, RPA is a burgeoning technology that lets software robots replicate the actions of human workers for routine tasks such as data entry, altering the way organizations handle many of their key business and IT processes.

 

When RPA is used in conjunction with cognitive technologies, its capabilities can be expanded even further, extending automation to processes that would otherwise require judgement or perception. Thanks to natural language processing, ever more sophisticated chatbot technology and speech recognition, bots can now extract and structure information from speech audio, text or images before passing it to the next step of the process.
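
At its core, the RPA pattern is a software robot doing what a clerk does: read a record, validate it, key it into a target system, and set anything odd aside for a human. A minimal sketch in Python, assuming the requests library is installed; the endpoint and column names are hypothetical:

    import csv
    import requests   # third-party HTTP library, assumed installed

    def run_bot(csv_path, endpoint="https://erp.example.com/api/invoices"):
        """Replicate a clerk's routine: read each row of an exported
        spreadsheet, validate it, and key it into the target system,
        setting exceptions aside for human review instead of halting."""
        exceptions = []
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                try:
                    payload = {
                        "supplier": row["supplier"],
                        "invoice_no": row["invoice_no"],
                        "amount": float(row["amount"]),   # validate like a clerk would
                    }
                    requests.post(endpoint, json=payload, timeout=10).raise_for_status()
                except Exception as exc:
                    exceptions.append((row, str(exc)))    # route to human review
        return exceptions

    # run_bot("invoices_export.csv")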

 

Industry Leading Automation

 

Blue Prism World offered attendees an opportunity to learn from and share with RPA industry leaders, practitioners, analysts and experts about real-world benefits and applications of RPA and Intelligent Automation.

 

With visionary keynote speakers such as Lynda Gratton, Professor of Management Practice at London Business School, Dario Zancetta, Head of Digital Technology and Innovation at Bank Of Ireland, and Vincent Vloemans from Global IT at Heineken, Blue Prism World presented a wealth of knowledge and a fascinating insight into the growing world of RPA and its potential to transform the way companies work.

 

The event was a fantastic opportunity to interact and network with people who are fascinated by the future of digitalization and what it means for the future of work. Listening to Robert Kesterton, Senior Manager of Business Improvement at Jaguar Land Rover, we heard how utilising robots helped his company deliver a better outcome, not only in terms of cost but also in functionality. The use of robots saved Jaguar Land Rover over 3,000 hours’ worth of work and £0.5m of investment they didn’t need to spend on their enterprise system, while delivering over £1.5m of revenue generation for the organisation in the process – a fantastic example of RPA technology delivering transformative innovation and efficiency.

 

We were particularly struck by the diversity of industries in attendance. From education and learning to finance and banking, all of the organisations present were doing different things, but they were all using robots to do them, highlighting the versatility of the technology on offer.

 

The Future Of Work

 

In her opening keynote presentation, Lynda Gratton explored the way that jobs overall are changing with the development of ever more sophisticated and employable technology. She suggested that this technology is at the heart of the future of the world of work, but that there is uncertainty as to what this will mean. Some argue that RPA will inevitably result in mass unemployment, while others envision a more positive future full of job creation and possibilities. The truth, according to Lynda, is that there will be as many jobs created as there will be destroyed.

 

At the same time, the jobs created won’t be the same as those that have come before them. Every single person, client and company you are advising will see their jobs transformed. And, in order to facilitate this transformation successfully, you have to retrain and re-skill your workforce and fundamentally change the context of work to encourage them to do that. Lynda reminded attendees that while automation frees people up to be more productive, it also frees them up to be more themselves.

 

Leading on from this, Lynda advocated for the promotion of women in work. Looking out across the audience, she highlighted how few women were in attendance, something that is reflective of the industry but that must change. She urged businesses to do all they can to encourage young women to take up the exciting and future-proof roles on offer.

 

Lifelong Learning

 

While we’re all focused on the parts of jobs that we are taking away from our employees, it’s vital to be just as invested in the parts that will take their place; otherwise our workforces can become anxious about the aspects of their role they are losing. Taking work off people to allow them to do human work that requires them to be empathetic and creative can, paradoxically, make them worried and therefore less able to make empathetic choices.

 

In order to allay this anxiety, we must be clear that lifelong learning is at the heart of everything we do. By replacing what you’re taking away from your employees with learning, you help them grow professionally and, crucially, help them to fulfil their potential as humans.

Implementation

 

Our time at Blue Prism World only further highlighted that efficient and accurate implementation of RPA technology is the key to its success. Here at Hanson Regan, we utilise RPA in our vetting systems, ensuring our candidates are up to scratch from the very start. Working only with proven candidates who can get the job done means less chance of hiccups and, therefore, less unnecessary spending.

 

RPA: Replacing Humans?

 

While RPA presents fantastic opportunities for organisations across the board, due to common misconceptions and misuse of terms like 'Artificial Intelligence', there is still widespread wariness of utilising robots.

 

In her closing keynote presentation, Leslie Willcocks, Professor of Technology, Work and Globalisation, highlighted that organisations often under-perform, under-fund, under-resource and, crucially, under-aspire with their robotic process automation and cognitive automation objectives. This can be due to a number of factors, but usually, when RPA myths are perpetuated and misconceptions are accepted as truth, companies become wary and distrustful of the technology on offer.

 

Common RPA myths include:

 

  • RPA is only used to replace humans with technology, leading to layoffs
  • Business operations staff feel threatened by RPA
  • RPA replaces the IT department
  • RPA is driven only by cost savings
  • All RPA supplier tools scale easily and are enterprise-friendly
  • It's all about the technology and the software
  • RPA is being replaced by Cognitive Automation and AI

 

Dispel these myths, however, and Cognitive RPA has the potential to go beyond basic automation to provide business outcomes such as enhanced customer satisfaction, lower churn, and increased revenues.

 

Look out for future blog posts as we delve deeper into our time at Blue Prism World and what RPA can mean for your business.

If you’re interested in a career in Artificial Intelligence or Robotics call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Detroit: Become Human a window into the future?

DETROIT: Become Human is the latest high-profile exclusive to come to the PS4, portraying a futuristic world where human-like robots walk among us.

Quantic Dream’s new piece of interactive entertainment, like Heavy Rain and Beyond before it, is full of gut-wrenching decisions and multiple branching paths.

Set in the near future of 2038, it explores a world where human-like androids live among us and what it means to be human.

 

But the story, which evokes the civil rights movements in the androids’ struggle, may be closer to the reality we face in a few decades time than you’d think.

Dr David Hanson, creator of the world’s most advanced android Sophia, believes that by 2045 robots will share the same civil rights as humans.

The robotics expert made the comments in a brand new research paper titled 'Entering The Age of Living Intelligent Systems and Android Society'.

Dr Hanson believes that by 2029 android AI will match the intelligence of a one-year-old human.

This will open the door for androids to assume menial positions in the military and emergency services just two years later in 2031.

And he feels by 2035 “androids will surpass nearly everything that humans can do”.

Dr Hanson expects a new generation of androids will be able to pass university exams, earn PhDs and function with the intelligence level of an 18-year-old human.

He believes that these advanced machines could even go on to start a ‘Global Robotic Civil Rights Movement’.

The movement itself is expected to happen in 2038 and will be used to question the ethical treatment of AI machines within human society.

Dr Hanson’s research paper was commissioned alongside the release of Detroit: Become Human on PS4.

He said: “As depicted in Detroit: Become Human, lawmakers and corporations in the near future will attempt legal and ethical suppression of machine emotional maturity, so that people can feel safe.”

“Meanwhile artificial intelligence won't hold still. As people's demands for more generally intelligent machines push the complexity of AI forward, there will come a tipping point where robots will awaken and insist on their rights to exist, to live free.”

Adam Williams, lead writer of Detroit: Become Human, added: “Detroit: Become Human is a work of fiction but Dr. Hanson’s research shows that life may soon imitate art.”

“His predictions are alarmingly close to the world depicted in the game. As the technology evolves, civil rights should be a natural consideration as androids become more prevalent in our society. I for one cannot wait to see how it plays out.”

Source: Express

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

SAP brings blockchain to the mainstream with their new supply chain initiative

Blockchain is one of the most recent technological buzzwords, following terms such as AI, machine learning and IoT.

At this stage, almost every business is intrigued by the concept. Some are even beginning to look for the means to begin their own adoption process.

IoT is now present in almost every newly made security camera on the planet. Machine learning is being used to stand in for customer service agents in the form of chatbots. AI is even being used by cybersecurity companies in the form of heuristic virus detection.

Blockchain, however, is the new kid on the block. The real-life use cases for the technology have predominantly been linked to Bitcoin – a decentralised digital currency first started in 2009.

While business leaders have undoubtedly seen the extolled virtues of blockchain, the rise of the technology has been hampered by a lack of understanding as to what it can be used for. More recently, the technology took a hit from the crash of Bitcoin prices at the start of the year.

With this in mind, the fact that one of the world’s largest enterprise software corporations, SAP, is integrating blockchain into its flagship supply chain packages is something of a surprise.

So, what is blockchain, and how can it be used in business?

Blockchain is, at least in the initial sense, a digital record of cryptocurrency transactions – a form of accounting software called distributed ledger technology (DLT). While it can be accessed by various parties, it is, most importantly, encrypted, verifiable and public.

The element of this accounting development that has caught the eye of business is its application in supply chains – an opportunity that SAP is capitalising on. Blockchain technology, in essence, allows for a greater level of transparency and traceability, which means that businesses can be absolutely sure where their products are coming from, where they are in the supply chain, and whether the product purchased is truly the product that was paid for.

In simple terms, it dramatically lowers risk where previously there was only trust and uncertainty.
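
Where does that certainty come from? Each record in a blockchain-style ledger commits to the hash of the previous one, so any later edit is detectable. A minimal, single-party sketch (a real DLT adds distribution, consensus and signatures):

    import hashlib, json

    def add_record(chain: list, data: dict) -> None:
        """Append a record that commits to the previous record's hash."""
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        body = {"data": data, "prev": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        chain.append({**body, "hash": digest})

    def verify(chain: list) -> bool:
        """Recompute every hash; any edited record breaks the chain."""
        prev_hash = "0" * 64
        for record in chain:
            body = {"data": record["data"], "prev": record["prev"]}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if record["prev"] != prev_hash or record["hash"] != digest:
                return False
            prev_hash = record["hash"]
        return True

    ledger = []
    add_record(ledger, {"event": "harvested", "lot": "A17", "farm": "Elm Farm"})
    add_record(ledger, {"event": "shipped", "lot": "A17", "carrier": "FastFreight"})
    print(verify(ledger))                        # True
    ledger[0]["data"]["farm"] = "Somewhere Else"
    print(verify(ledger))                        # False: tampering detected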

In May 2018, SAP’s blockchain lead, Torsten Zube, revealed the company was applying blockchain to agricultural supply chains through its “Farm to Consumer” initiative. More recently at SAP’s SapphireNow 2018 conference, the company took a stand with its “intelligent enterprise” undertaking.

The organisation announced a new range of partnerships and products to “enable enterprises to become more intelligent, with expanded capabilities from advanced technologies such as conversational artificial intelligence, blockchain and analytics” for use within its Leonardo package.

Speaking on the blockchain integration strategy, Zube noted: “Networking along the traditional lines of value chains will be replaced by sharing data governance, resources, processes and practices, and will lead to joint learning opportunities.”

“If enterprises can access the complete version of product history,” he explained, “this could result in a shift from a central, unilateral, supplier-led production to a consumer demand-led supply organised by a consortium of peers.”

Of particular interest, however, is SAP’s refusal to tie itself into any one blockchain provider early. Speaking on its blockchain service at the Sapphire conference, Gil Perez, Senior Vice President for Product and Innovation and Head of Digital Customer Initiatives at SAP, confirmed that blockchain technology is still being defined… noting that the company is not looking to commit until the market decides which way to go – minimising the impact on customers as the technology evolves.

With this in mind, there’s one thing that’s certain: blockchain has joined IoT, machine learning and AI as a concept with a significant number of applications in the real world. With SAP now integrating it into its supply chain offering, the ledger technology has taken the first step towards more widespread adoption.

Source: MA

If you’re interested in a career in SAP or IoT call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

AI and robotics forecast to generate 7.2m jobs, more than will be lost due to automation

Artificial intelligence is set to create more than 7m new UK jobs in healthcare, science and education by 2037, more than making up for the jobs lost in manufacturing and other sectors through automation, according to a report.

A report from PricewaterhouseCoopers argued that AI would create slightly more jobs (7.2m) than it displaced (7m) by boosting economic growth. The firm estimated about 20% of jobs would be automated over the next 20 years and no sector would be unaffected.

AI and related technologies such as robotics, drones and driverless vehicles would replace human workers in some areas, but also create many additional jobs as productivity and real incomes rise and new and better products were developed, PwC said.

Increasing automation in factories is a long-term trend but robots such as Pepper, created by Japan’s Softbank Robotics, are beginning to be used in shops, banks and social care, raising fears of widespread job losses.

However, PwC estimated that healthcare and social work would be the biggest winners from AI, where employment could increase by nearly 1 million on a net basis, equivalent to more than a fifth of existing jobs in the sector.

Professional, scientific and technical services, including law, accounting, architecture and advertising firms, are forecast to get the second-biggest boost, gaining nearly half a million jobs, while education is set to get almost 200,000 extra jobs.

John Hawksworth, the chief economist at PwC, said: “Healthcare is likely to see rising employment as it will be increasingly in demand as society becomes richer and the UK population ages. While some jobs may be displaced, many more are likely to be created as real incomes rise and patients still want the ‘human touch’ from doctors, nurses and other health and social care workers.

“On the other hand, as driverless vehicles roll out across the economy and factories and warehouses become increasingly automated, the manufacturing and transportation and storage sectors could see a reduction in employment levels.”

PwC estimated the manufacturing sector could lose a quarter of current jobs through automation by 2037, a total of nearly 700,000.

Transport and storage are estimated to lose 22% of jobs – nearly 400,000 – followed by public administration and defence, with a loss of almost 275,000 jobs, an 18% reduction. Clerical tasks in the public sector are likely to be replaced by algorithms while in the defence industry humans will increasingly be replaced by drones and other technologies.

 

London – home to more than a quarter of the UK’s professional, scientific and technical activities – will benefit the most from AI, with a 2.3% boost, or 138,000 extra jobs, the report said. The east Midlands is expected to see the biggest net reduction in jobs: 27,000, a 1.1% drop.

Source: The Guardian

 

If you’re interested in a career in Artificial Intelligence or Robotics call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Yamaha artificial intelligence transforms a dancer into a pianist

Yamaha AI enabled world-renowned dancer Kaiji Moriyama to control a piano with his movements. The performance was accompanied by the Berlin Philharmonic Orchestra Scharoun Ensemble.

Yamaha Corporation is excited to announce that Yamaha artificial intelligence (AI) technology enabled world-renowned dancer Kaiji Moriyama to control a piano with his movements. The concert, held in Tokyo on November 22, 2017, was entitled “Mai Hi Ten Yu” and was sponsored by Tokyo University of the Arts and Tokyo University of the Arts COI. As technical cooperation for the concert, Yamaha provided an original system that can translate human movements into musical expression using AI technology.

Drawing on the system provided by Yamaha, Moriyama gave a brilliant performance synchronized with beautiful piano sound. Moreover, he was accompanied by other leading players, the Berlin Philharmonic Orchestra Scharoun Ensemble.

The concert performed by the talented players with Yamaha technology showed "a form of expression that fuses body movements and music."

Yamaha believes this performance represents steady progress in the pursuit of new forms of artistic expression and will continue to develop this technology to further expand the possibilities for human expression.

Technology Overview

The AI adopted in the system, which is now under development, can identify a dancer's movement in real time by analyzing signals from four types of sensors attached to a dancer's body. This system has an original database that links melody and movements, and, with this database, the AI on the system creates suitable melody data (MIDI) from the dancer's movements instantly. The system then sends the MIDI data to a Yamaha Disklavier™ player piano, and it is translated into music.
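
The final hop of that pipeline, turning a movement reading into a MIDI note, can be sketched in a few lines. The mapping below is invented for illustration and is not Yamaha's learned melody database; it assumes the mido MIDI library is installed:

    import mido   # MIDI library, assumed installed (pip install mido)

    def movement_to_midi(frame: dict) -> mido.Message:
        """Map one frame of sensor data to a MIDI note. The mapping is
        invented for illustration, standing in for the learned
        movement-to-melody database in Yamaha's system."""
        note = 48 + int(frame["arm_height"] * 36)       # higher arm -> higher pitch
        velocity = min(127, int(frame["speed"] * 127))  # faster motion -> louder
        return mido.Message("note_on", note=note, velocity=velocity)

    frames = [{"arm_height": 0.2, "speed": 0.3},
              {"arm_height": 0.8, "speed": 0.9}]
    for frame in frames:
        print(movement_to_midi(frame))   # a real system would send this
                                         # to a player-piano MIDI port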

To convert dance movements into musical expression, the Yamaha Disklavier™ is indispensable because it can reproduce a rich range of sounds with extreme accuracy through very slight changes in piano touch. Moreover, the concert used a special Disklavier configured on the basis of Yamaha’s flagship CFX concert grand piano to express the performance of the talented dancer Moriyama fully and completely.

 

Source: Yamaha

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

The user experience of blockchain applications has a long way to go

People keep asking what the killer app of the blockchain is. It may already exist; if so, it’s hiding behind a terrible user experience.

Conor Fallon from Hackermoon recently tried out a blockchain social media application called Minds. It turned out to be really bad: there were usability potholes at every corner, and the product seemed very bloated, trying to replace Facebook, Instagram and Twitter all at once.

Conceptually, this product could be cool: it’s a social network that allows you to earn tokens while you interact with people. You can then use those tokens to promote your own posts or to cash them in. What could go wrong? Well, a lot, apparently.

From the first screen it is immediately clear that little to no usability testing has been done on this product; I would wager not even internally. This is indicative of an immature UX strategy.

In defense of the guys at Minds, this is quite a common experience I have had with dApps.

Why will someone use your blockchain app?

What we often hear when blockchain entrepreneurs talk about their platforms, and why people will flock to them, is the following:

1. Data ownership will drive adoption

This is incorrect: data ownership is not a strong reason to join something. Data breaches are a reason to stop using something.

I hear many blockchain entrepreneurs say, “People are sick of Facebook mining their data,” which may be true, but it’s only relevant up to a point. The tide may be turning on Facebook, but will it turn on Google? Probably not; it’s extremely difficult to live in a world without using Google. So the real insight here is that, yes, some people may be sick of the government or companies spying on them, but that sentiment usually comes from a place of privilege, where you can say you would prefer to pay rather than get an ad-driven service for free.

A reason to join something is that it allows me to do something I couldn't otherwise do. Snapchat has funny dog faces that look like fun, so I want to try that out.

2. Monetary incentive will drive adoption

Don't assume that monetary incentive will solve all of your problems. In many instances, monetary incentives actually work against participants.

 

Why are sites like Wikipedia and Mumsnet so good? It's because people are intrinsically motivated to help other people. What would paying participants actually do to the content of these sites? Do you think it would get better? Don't assume that adding money into the mix is going to solve all of your issues.

You are far better off aligning your features with the intrinsic motivations of your audience. And if you power the right parts of the site with monetary incentives, that may be the killer app.

The peasant and the chicken — a parable

This is a story from the biography of Che Guevara, which I read as an impressionable 14-year-old, so the memory is a bit hazy, but it goes something like this:

Castro's rebels liberated a Cuban town from an oppressive regime. In the aftermath, Che spotted a depressed chicken farmer. "Why are you so sad?" asked Che. "We liberated you from your oppressors!"

"Your soldiers ate my chickens. Before, when Batista (the oppressor) came through our village, his soldiers also ate my chickens. When the next soldiers come, they will also eat my chickens."

And so it goes: developer communities can argue about ideological implementations of networks and power distributions, but it may all be for naught if the product doesn't actually do anything meaningful for people.

A plea to blockchain app designers

Learn the basics of product strategy; it's been around for as long as we've been designing physical products and has been tested time and time again. Learn the Kano model. Do research about your audiences before and during product development. Test economic incentives before rolling them out. Learn about people's inherent irrational biases. All this will lead to the adoption we now require to build our crypto-utopia.

 

Source: Hackermoon

 

If you’re interested in a career in Big Data call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

The New Bot Economy

The New Bot Economy

The robust supply chain created by car manufacturers over time means car manufacturing today is an efficient process of putting together parts sourced from specialized vendors located around the world. This cuts the time to make each car to a few hours, but more importantly makes the process fully repeatable across many car models. The advent of streamlined automotive supply lines and assembly was a key evolution, driven by accelerating demand for cars in the booming '50s and '60s. Today, the RPA market is at a similar stage of evolution, with accelerating market demand driving innovation.

Justin Watson, Partner at Deloitte, spoke of the Bot Economy at Automation Anywhere's IMAGINE London event. His analogy: just as the car industry has evolved to easy assembly of standard parts, building your RPA solution from pre-built components is the way of the future.

We could not agree more.

Building RPA bots should be an exercise in chaining together "plug & play" bots from the Bot Store.

Here is a cool example.

Pre-built bots with built-in value

Rather than announcing handshakes and integration-at-a-distance type partnerships (that leave the burden of getting the integration up and running on the customer), we have set out to create a true "plug & play" experience using the Bot Store.

Our partners (SIs, ISVs, Integration partners, etc.) list their bots on the Bot Store alongside Automation Anywhere bots. Each bot encapsulates best practices that reflect many years of combined expertise across RPA deployments.

The ecosystem of bots is evolving and growing quickly. Every Automation Anywhere customer - independent of the stage of their RPA journey, or specific affiliations/industries/processes – can leverage the built-in value of our ecosystem of – dare we say – perfected bots.

What exactly is the Automation Anywhere Bot Store?

It's a true marketplace of pre-built bots that connects customers with bot creators. Customers can easily search, assess, and select bots based on their capabilities, with peer reviews to evaluate the usage experience.

The Bot Store showcases the best bots across many business applications (Salesforce, SAP, Zendesk, ServiceNow, etc.), built both by Automation Anywhere and our valued partners. That makes it fundamentally different from one-off partnerships or technology alliances based on published APIs and community libraries.

Benefit now

Bot Store is available now. Search, then pick and choose bots of immediate value and relevance to your stage of the RPA journey.

It has been only a few weeks since the Bot Store launched, and the response has been overwhelming. The sheer volume of bot downloads reinforces our belief that this will truly accelerate the race to ROI on RPA investments and enable enterprises to achieve their RPA goals in a very short time.

 

Source: Automation Anywhere

 

If you’re interested in a career in Robotics call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Amazon AI predicts users’ musical tastes based on playback duration

Amazon AI predicts users’ musical tastes based on playback duration

AI engineers at Amazon have developed a novel way to learn users’ musical tastes and affinities.

AI engineers at Amazon have developed a novel way to learn users’ musical tastes and affinities — by using song playback duration as an “implicit recommendation system.” Bo Xiao, a machine learning scientist and lead author on the research, today described the method in a blog post ahead of a presentation at the Interspeech 2018 conference in Hyderabad, India.

Distinguishing between two similarly titled songs (for instance, Lionel Richie's "Hello" and Adele's "Hello") can be a real challenge for voice assistants like Alexa. One way to resolve this is by having the assistant always choose the song that the user is expected to enjoy more, but as Xiao notes, that's easier said than done. Users don't often rate songs played back through Alexa and other voice assistants, and playback records don't necessarily provide insight into musical taste.

“To be as useful as possible to customers, Alexa should be able to make educated guesses about the meanings of ambiguous utterances,” Xiao wrote. “We use machine learning to analyze playback duration data to infer song preference, and we use collaborative-filtering techniques to estimate how a particular customer might rate a song that he or she has never requested.”

The researchers found a solution in song duration. In a paper (“Play Duration based User-Entity Affinity Modeling in Spoken Dialog System”), Xiao and colleagues reasoned that people will cancel the playback of songs they dislike and let songs they enjoy continue to play, providing a dataset on which to train a machine learning-powered recommendation engine.

They divided songs into two categories: (1) songs that users played for less than 30 seconds and (2) songs that they played for longer than 30 seconds. Each playback was represented as an entry in a user-song matrix: the first category was assigned a score of negative one, and the second a score of positive one.

To account for playback interruptions unrelated to musical preference, such as an interruption that caused a user to stop a song just as it was beginning, they added a weighting function. Songs received a greater weight if they were played back for 25 seconds instead of one second, for example, or for three minutes instead of two minutes.
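
A rough sketch of that construction in Python, assuming toy data and an illustrative weighting curve (the paper's actual weighting function is not given here):

```python
# A sketch of the play-duration affinity matrix described above.
# The data and the weight() curve are illustrative, not Amazon's.
import numpy as np

# (user, song, seconds_played) playback records — toy data.
records = [
    (0, 0, 3),    # user 0 skipped song 0 almost immediately
    (0, 1, 210),  # user 0 let song 1 play for 3.5 minutes
    (1, 0, 185),
    (1, 2, 12),
]

n_users, n_songs = 2, 3
affinity = np.zeros((n_users, n_songs))

def weight(seconds, cap=180):
    """Confidence weight that grows with playback duration (per the article's
    example: 25 s outweighs 1 s, three minutes outweighs two)."""
    return min(1.0, seconds / cap)

for user, song, seconds in records:
    sign = -1.0 if seconds < 30 else 1.0   # the paper's two categories
    affinity[user, song] = sign * weight(seconds)

print(affinity)
# Collaborative filtering (e.g. matrix factorization) would then fill in the
# missing entries to estimate affinity for songs a user has never requested.
```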

When evaluated against users' inferred affinity scores, the correlation was strong enough to demonstrate the model's effectiveness, Xiao said. Furthermore, the approach appears to be good for more than music: in the future, the researchers plan to apply it to other content, such as audiobooks and videos.

Source: Venturebeat

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Chatbots: Redefining Customer Service for Growing Companies

Chatbots: Redefining Customer Service for Growing Companies

The workplace energy of a small and midsize technology business is unlike anything seen in a large enterprise. In the midst of a fast-paced, caffeine-fueled day, the freedom to make a difference at work is always present. This spirit is certainly not limited to growing companies, but it’s certainly easier to spot these opportunities and see the fruits of your labor in one.

However, not every function may feel the love—especially customer service representatives. They are on the front line, fielding calls from customers who are often angry or disappointed. Whether the instructions were misread, passcodes were forgotten, or a new part is needed, reps can feel burned out after working on the same requests day after day.

It doesn’t have to be this way for customer service reps. By combining machine learning with the recent slew of chatbots, service organizations have a distinct opportunity to focus on the experience of each interaction from the perspective of the customer and the rep.

What Are Chatbots? And How Does Machine Learning Make Them Even Better?

Chatbots are computer programs that mimic human-to-human written and voice-enabled communication by using artificial intelligence. From self-initiating a series of tasks to holding a quasi-natural, two-way conversation, this technology is beginning to change how consumers and the brands they love engage with each other online, on the phone, and even through e-mail.

Suppose you wanted to know if today's ballgame will be rained out. If a chatbot is not available, you would direct your browser to weather.com, for example, and then type in your zip code for the forecast. However, the use of a chatbot can turn this experience into a faster, more meaningful interaction. For instance, the Weather Channel's chatbot allows you to send a chat text asking for current conditions or a three-day forecast. And immediately, the chatbot replies.

Yes, this is a very simplistic example of a chatbot. But with artificial intelligence evolving into more sophisticated forms, such as machine learning, chatbots no longer need to be governed by just a series of preprogrammed rules, scripts, and prompts. Now, they can pull from the entire company’s collective expertise and experience and sift through it all to find the best-possible resolutions to a customer’s query.
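
To illustrate the difference, here is a toy sketch contrasting a purely scripted reply table with a naive "learned" lookup over past resolutions. The rules, messages, and URL are all invented for the example; a production system would use a real intent model rather than string similarity.

```python
# Illustrative only: scripted rules vs. a naive lookup over the
# "collective expertise" of past rep resolutions.
import difflib

# Classic scripted chatbot: fixed rules and prompts.
RULES = {
    "reset password": "You can reset your passcode at example.com/reset.",
    "order status":   "Please share your order number and I'll check it.",
}

# Past queries and how reps resolved them (toy knowledge base).
PAST_RESOLUTIONS = {
    "i forgot my passcode":           "You can reset your passcode at example.com/reset.",
    "my replacement part never came": "I've re-issued the part; it ships in 2 days.",
}

def scripted_reply(message):
    for trigger, reply in RULES.items():
        if trigger in message.lower():
            return reply
    return None

def learned_reply(message):
    """Find the most similar past query and reuse its resolution."""
    match = difflib.get_close_matches(message.lower(), list(PAST_RESOLUTIONS),
                                      n=1, cutoff=0.4)
    return PAST_RESOLUTIONS[match[0]] if match else None

msg = "I forgot my passcode again"
print(scripted_reply(msg) or learned_reply(msg) or "Routing you to a rep...")
```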

Directing Interest in Machine Learning towards a More-Rewarding Service Experience

For years, technology firms have been primarily focused on setting a digital foundation with tools such as the cloud, Big Data, and analytics. However, some of that attention is now being pulled towards machine learning to turbocharge their business processes, decision-making, and customer interactions.

In fact, the Oxford Economics study, “The Transformation Imperative for Small and Midsize Technology Companies,” suggests a higher rate of investment in machine learning among technology firms than their peers in other industries. Although adoption numbers were still low at 6% for small and midsize technology companies in 2017, that same figure is projected to become more substantial as it nearly quadruples in 2019. Technology firms are leading the way, but companies in other industries should also consider how these tools can support their customer service function.

That said, chatbots present a clear opportunity for embracing machine learning in a way that is profoundly human, efficient, and meaningful without breaking the budget. They can help automate simple tasks, provide immediate service, and trigger specific, rules-based actions—whether a customer contacts the business through a messaging app, social media, phone, or e-mail—by learning how reps resolve frequently occurring queries. By mimicking simple, real-life conversations, chatbots can quickly become a low-cost way to offer around-the-clock customer assistance.

Chatbots can also transition the customer service organization from a point of customer interaction to a source of business intelligence and marketing opportunities. As the technology addresses customer issues and triggers processes, it captures every request, piece of feedback, and action and pushes it into a cloud-based ERP system that every business area can assess. Marketing and sales teams, for example, can use this information to find new opportunities for cross- or up-selling, new promotions, bundled offers, and even new services.

Investing in Chatbots Drives Untapped Value for Customer Service

With all the above said, it may seem that chatbots are a natural next step for small and midsize companies in all industries to expand their customer service capabilities. However, it can be intimidating to go through the process of producing them.

Here’s the good news: there’s more than one way to design a chatbot.

Businesses can choose to develop their own bot with a low-cost app, subscription-based cloud service, usage-based collaborative bot platform, or technology partner. But no matter the chosen path, the development process must be defined by specific capability needs, data to be accessed and captured, system integration requirements, and intended goals. It is very important to find a platform—such as recast.ai—that doesn’t limit API calls and allows the creation of unlimited bots within a few minutes or hours, rather than weeks and months.

When matched closely to the needs of customer service reps and customers, chatbots can deliver a potential benefit that is more valuable than the price tag itself. No one likes to be bogged down by repetitive, mundane tasks that provide no real value to the company’s growth. But if chatbots take on those activities, the technology may be the godsend that customer service reps need to handle more-challenging exceptions that allow them to learn and grow their skills and contribute directly to the bottom line.

Source: SAP

If you’re interested in a career in Artificial Intelligence or SAP call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

An approach to logical cognition and rationality in artificial intelligence

An approach to logical cognition and rationality in artificial intelligence

1. Philosophical Overview

The ability to think logically is what distinguishes man from all other animals. Plato believed that we are all born with something called a "rational soul": some essential property of all human beings that gives us the unique ability to think in logical and abstract ways. The result of possessing a rational soul, according to him, is the ability to access some 'higher plane' of reality, which consists of so-called "forms", or idealized representations of things by which our physical world and everything in it can be described, in terms of how well physical objects conform to the ideal representations given by the forms. The duty of man is then to sculpt our physical world to better fit these forms, thus moving ourselves toward some "perfect" idealized state and effectively progressing humanity.

While this idea is now generally considered outdated, the ideas of later philosophers contain many elements originally put forth by Plato in his description of forms, and are generally based on a more psychological approach, taking into account the cognitive processes that give rise to the idealized representations with which our experiences can be framed. The influence of Plato seems evident in the work of Kant and his description of "schemata", or generalized models of things defined by logical relations. According to Kant, our minds have an inherent understanding of time and space, which gives rise to a sort of "mathematical intuition" that can be used to comprehend our patterns of perception. We can apply this intuition to build schematic structures, and effectively "plug" perceptual information into these structures to better understand our world in a logical, rational way.

Considering the philosophical background of these ideas, I will propose, outline, and describe a computer-scientific method by which an intelligent agent may utilize the process of "logical framing" to classify, organize, and comprehend its experience in a rational way, utilizing its observations to learn about the environment and improve its ability to make generalizations, predictions, and decisions about the world.

2. Technical Introduction

Our description of logical framing as a process of rational comprehension of perceptual experience by an intelligent agent begins with the definition of “templates” and “objects”. Templates are similar to forms and schemata, and objects are similar to perceptual patterns. While both are network-like structures of data, they differ in both content and function.

The nodes of an object represent the elements or parts of some external thing, and the links represent the relations between elements. Elements are defined by “descriptive properties” that exist along any number of dimensions (e.g. spatial, temporal, etc.), and relations are defined by “distinctive properties” that exist along dimensions shared by each of the elements to which the relation is connected. Descriptive properties can be thought of as values (e.g. spatial position) and distinctive properties as differences between values (e.g. spatial distance).

The nodes of a template, however, are descriptive functions over an object’s elements, and the links are distinctive functions over an object’s relations. Descriptive functions take the value of a descriptive property of a given element as input, and a distinctive function takes the value of a distinctive property of a given relation, which may also be understood as taking the values of a descriptive property from two connected elements.
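
These definitions translate naturally into data structures. The sketch below is my own transcription with invented names (Obj, Template, node_fns, link_fns), not the author's code; the truth values in [0, 1] anticipate the classification process described next.

```python
# Invented names, transcribing the definitions above (not the author's code).
from dataclasses import dataclass

@dataclass
class Obj:
    """An observed thing: elements (nodes) with descriptive properties,
    relations (links) with distinctive properties."""
    elements: list    # e.g. [{"x": 0.0, "y": 0.0}, {"x": 1.0, "y": 0.0}]
    relations: dict   # e.g. {(0, 1): {"distance": 1.0}}

@dataclass
class Template:
    """Nodes are descriptive functions over elements; links are distinctive
    functions over pairs of connected elements. Both return truth values."""
    node_fns: list    # each: element -> truth value in [0, 1]
    link_fns: dict    # (i, j) -> (elem_i, elem_j) -> truth value in [0, 1]

# Example: a template whose "essence" is two elements roughly one unit apart.
def about_unit_apart(a, b):
    dist = ((a["x"] - b["x"]) ** 2 + (a["y"] - b["y"]) ** 2) ** 0.5
    return 1.0 if abs(dist - 1.0) < 0.1 else 0.0

unit_pair = Template(node_fns=[lambda e: 1.0, lambda e: 1.0],
                     link_fns={(0, 1): about_unit_apart})
```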

3. Classification Process

Descriptive and distinctive functions return some indication of truth, either in the form of a binary value (i.e. 1 or 0) or in the form of a probabilistic value (i.e. between 0 and 1). The truth value returned by a function determines how well a given element or set of elements fits the logical definition provided by the template. Therefore, the sum of the truth values returned by the descriptive and distinctive functions of a template provides a fitness measurement for a given object. In more philosophical terms, the total truth value determines the degree to which a given object "participates" in the "essence" of a template.

The degree of participation of a given object with respect to a particular template can be used to decide how that object is classified. A high degree of participation indicates that the object is likely to be an "instance" of the template, and thus that it can be classified as such. This classification process requires some method of mapping a given object to a particular template: selecting a valid set of elements whose topology fits that of the template, and whose descriptive and distinctive properties fit the descriptive and distinctive functions of that template (i.e. result in an adequately high truth value).
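
Using the structures sketched above, the degree of participation is a direct sum of truth values over an assumed node-to-element mapping (normalized here to [0, 1] for convenience):

```python
def degree_of_participation(template, obj, mapping):
    """mapping: template node index -> object element index."""
    total = 0.0
    for i, fn in enumerate(template.node_fns):
        total += fn(obj.elements[mapping[i]])              # descriptive truth values
    for (i, j), fn in template.link_fns.items():
        total += fn(obj.elements[mapping[i]],
                    obj.elements[mapping[j]])              # distinctive truth values
    return total / (len(template.node_fns) + len(template.link_fns))

# Two dots one unit apart fully "participate" in the unit_pair template.
two_dots = Obj(elements=[{"x": 0.0, "y": 0.0}, {"x": 1.0, "y": 0.0}], relations={})
print(degree_of_participation(unit_pair, two_dots, {0: 0, 1: 1}))   # 1.0
```

Classification then amounts to computing this score against a set of templates and choosing the highest.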

4. Mapping Process

The complexity of objects and templates can scale toward infinity. This means that the mapping process which produces a valid classification for a given object cannot be performed by considering the entirety of an object or template at once. The solution is therefore to consider each piece of an object or template in isolation, starting at a single node called the "current node" and taking only its neighbors into consideration. The neighboring nodes of the template are then individually "filled" by the neighboring elements of the object; each time a valid element is selected for a specific node, the current node moves to the node just filled. This results in a depth-first search for the optimal mapping between a given object and a particular template, which provides a method of measuring the degree of participation for an object with respect to a template, and eventually allows the successful classification of an object by computing the total truth values for a set of templates and selecting the highest one as the best-known option.

Each time all the neighbors of the current node are successfully filled, the previous node becomes the current node once again and its remaining neighbors are filled. This process continues until the neighbors of the initial node are successfully filled, resulting in the calculation of the degree of participation for the current mapping. For each neighbor of the current node at any given time throughout the mapping process, the set of potential elements is found by first computing the truth values of the descriptive function associated with that neighbor, given each possible element.

This allows the set of potential elements to be reduced such that only the elements which satisfy the descriptive function remain. Then, each remaining element is passed to the distinctive function associated with the link between the current node and the neighbor, along with the element filling the current node which was previously selected. The set of potential elements is again reduced to only those elements which satisfy both the descriptive as well as the distinctive functions associated with the neighbor. In the event that a potential element is chosen and then later proves invalid, it is simply removed from the set of potential elements and another is selected for that neighbor. Through trial and error, the best-known mapping between an object and a template can be found and the degree of participation may be calculated.
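
A compact backtracking search captures this trial-and-error mapping. As a simplification, the sketch fills template nodes in a fixed order rather than the neighbor-to-neighbor traversal described above, and it assumes symmetric distinctive functions:

```python
# Depth-first, trial-and-error search for a valid template-to-object mapping,
# using the Obj/Template structures sketched earlier. Illustrative only.
def find_mapping(template, obj, threshold=0.5):
    n_nodes = len(template.node_fns)

    def extend(mapping, used):
        if len(mapping) == n_nodes:                     # all nodes filled
            return dict(mapping)
        i = len(mapping)                                # next template node to fill
        for e, elem in enumerate(obj.elements):
            if e in used:
                continue
            ok = template.node_fns[i](elem) >= threshold         # descriptive filter
            for (a, b), fn in template.link_fns.items():
                if ok and i in (a, b):
                    other = b if a == i else a
                    if other in mapping:                         # distinctive filter
                        ok = fn(elem, obj.elements[mapping[other]]) >= threshold
            if ok:
                mapping[i] = e
                found = extend(mapping, used | {e})
                if found is not None:
                    return found
                del mapping[i]                          # invalid choice: backtrack
        return None

    return extend({}, set())

print(find_mapping(unit_pair, two_dots))   # {0: 0, 1: 1}
```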

5. Object Construction

When an intelligent agent receives perceptual input, an object is constructed that represents the information observed. However, a filtering step must occur before this construction process. Visual perception, for instance, requires first an edge detection step that produces a space of black and white cells, where the edges are highlighted by white cells and everything else is black.

Once the edges are found, a set of "base templates" is applied to the space. Base templates are unique among templates in that one node is connected to all other nodes, and no other connections exist; this is called a star topology. The "central node", the node connected to all others, is assigned to the element at a given position in the space, and the neighborhood around that position fills the other nodes in the template. The descriptive functions of a base template are restricted, and may only denote the presence or absence of an edge. The distinctive functions are fixed and denote the horizontal and vertical differences between the positions of elements. The base templates are moved along the space to classify the edge-patterns of each subspace, since each base template can only consider a small array of cells at a time.

The result is another space containing the abstracted objects derived from the classification of subspaces. This new space is smaller than the previous one, and the templates to which its subspaces are mapped do not conform to the strict topological constraints with which those at the previous level must comply. While the templates at this level do have restrictions on size, their topologies can take on a variety of forms, and their functions may vary both in the contents on which they act and in the specific type of calculations they perform.

6. Object Abstraction

Templates are learned through experience and observation of objects. Sets of observed objects are clustered according to shared properties, as well as equivalent values of said properties. By grouping together like objects, the functions of a template may be composed in order to best describe the commonalities between objects in a particular group.

Template development follows a certain logic to determine how the functions ought to be composed. By following a set of "development rules", a template is constructed by analyzing a set of grouped objects and extracting the attributes that describe them. The first development rule specifies the process by which objects are grouped together. It states that the likelihood of two objects, A and B, belonging to the same group corresponds to two ratios: the ratio between the number of equivalent properties of A and B and the number of shared properties of A and B, and the ratio between the number of shared properties of A and B and the average total number of properties of A and B.
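
Transcribed literally, the first development rule might look like the following; one natural reading combines the two ratios by multiplication, and exact-match equality stands in for whatever equivalence test the author intends:

```python
# The first development rule, under the assumptions stated above.
def grouping_likelihood(props_a: dict, props_b: dict) -> float:
    shared = set(props_a) & set(props_b)            # properties both objects have
    if not shared:
        return 0.0
    equivalent = {k for k in shared if props_a[k] == props_b[k]}
    avg_total = (len(props_a) + len(props_b)) / 2
    return (len(equivalent) / len(shared)) * (len(shared) / avg_total)

# Two objects sharing 3 properties, 2 of them with equal values: ~0.57
print(grouping_likelihood({"color": "red", "size": 2, "shape": "round", "x": 1},
                          {"color": "red", "size": 2, "shape": "square"}))
```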

Source: Signified Origins

 

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Understanding the ‘black box’ of artificial intelligence

Understanding the ‘black box’ of artificial intelligence

Artificial intelligence (AI) is playing an increasingly influential role in the modern world, powering more of the technology that impacts people’s daily lives.

For digital marketers, it allows for more sophisticated online advertising, content creation, translations, email campaigns, web design and conversion optimization.

Outside the marketing industry, AI underpins some of the tools and sites that people use every day. It is behind the personal virtual assistants in the latest iPhone, Google Home, and Amazon Echo. It is used to recommend what films you watch on Netflix or what songs you listen to on Spotify, steers conversations you have with your favorite retailers, and powers self-driving cars and trucks that are set to become commonplace on roads around the world.

What is perhaps less widely known is that AI may also decide whether you are approved for a loan, determine the outcome of a bail application, identify threats to national security, or recommend a course of medical treatment.

And as the technology progresses and becomes ever-more complex and autonomous, it also becomes harder to understand, not just for the end users, but even for the people who built the platforms in the first place. This has raised concerns about a lack of accountability, hidden biases, and the ability to have clear visibility of what is driving life-changing decisions and courses of action.

These concerns are particularly prevalent when looking at the uses of deep learning, a form of artificial intelligence that requires minimal guidance, but ‘learns’ as it goes through identifying patterns from the data and information it can access. It uses neural networks and evolutionary algorithms which are essentially AI being built by AI, and can quickly resemble a tangled mess of connections that are nearly impossible for analysts to disassemble and fully understand.

What are neural networks?

The neural networks behind this new breed of deep machine learning are inspired by the connected neurons that make up a human brain. They use a series of interconnected units or processors and are adaptive systems that can adjust their outputs, essentially 'learning' by example as they go and adapting their behavior based on results.

This mimics evolution in the natural world, but at a much faster pace, with the algorithms quickly adapting to the patterns and results discovered to become increasingly accurate and valid.

Neural networks can identify patterns and trends among data that would be too difficult or time-consuming to deduce through human research, consequently creating outputs that would otherwise be too complex to manually code using traditional programming techniques.

This form of machine learning is very transparent on some levels, as it reflects the human behavior of trial and error, but at a speed and scale that wouldn’t otherwise be possible. But it is this speed and scale that makes it hard for the human brain to drill down into the expanding processes and keep track of the millions of micro-decisions that are powering the outputs.

 

Why transparency is important in artificial intelligence

What is black box AI? Put simply, it is the idea that we can understand what goes in and what comes out, but don't understand what goes on inside.

As AI is used to power more and more high profile and public facing services, such as self-driving cars, medical treatment, or defense weaponry, concerns have understandably been raised about what is going on under the hood. If people are willing to put their lives in the hands of AI-powered applications, then they would want to be sure that people understand how the technology works and how it makes decisions.

The same is true of business functions. If you’re a marketer entrusting AI to design and build your website or make important conversion optimisation decisions on your behalf, then wouldn’t you want to understand how it works? After all, design changes or multivariate tests can cost or make a business millions of dollars a year.

There have been calls to end the use of 'black box' algorithms in government because, without true clarity on how they work, there can be no accountability for decisions that affect the public. Fears have also been raised over bias within decision-making algorithms, with a perceived lack of due process in place to prevent or protect against it.

There is also a strong case for making AI systems' accountability and openness to interrogation a legal as well as an ethical right. If machines are making life-changing decisions, then it stands to reason that those decisions should be able to be held up to the highest scrutiny.

A report from AI Now, an AI institute at NYU, has warned that public agencies and government departments should rethink the AI tools they are using to ensure they are accountable and transparent when used for making far-reaching decisions that affect the lives of citizens.

So are all these fears over black box AI well founded, and what can be done to reassure users about what is going on behind the machines?

Work on a need to know basis

Many digital marketers and designers have an overall understanding of digital processes and systems, but not necessarily a deep understanding of how all of those things work. Many functions are powered by complex algorithms, code, programming or servers, and yet are still deemed trustworthy enough for investing large chunks of the marketing budget.

Take SEO, for example. How Google ranks search results is a notoriously secret formula. But agencies and professionals make careers out of their own interpretation of the rules of the game, trying to deliver what they think Google wants to be able to boost their rankings.

Similarly, Google AdWords and Facebook Ads have complex AI behind them, yet the inner workings of the auctions and ad positions are kept relatively quiet behind the closed doors of the internet giants. While there is an argument that such companies should be more transparent when they wield such power, this doesn't stop marketers from investing in the platforms. Not understanding the complexities does not stop people from optimizing their campaigns; instead, they focus on what goes in and monitor the results to gain an understanding of what works best.

There is also an element of trust in these platforms that if you play by the rules that they do publicize and work to improve your campaigns, then their algorithms will do the right thing with your data and your advertising spend.

By choosing reputable machine learning platforms and constantly monitoring what works, you can feel confident with the technology, even if you don’t have a clear understanding of the complex workings behind them.

A lot of people will also put their trust in mass-market AI hardware without expecting to understand what's inside the black box. A layman who drives a regular car with no real understanding of how it changes gear is no more in the dark than somebody who does not know how their self-driving car changes direction.

But of course, there is a key distinction between end users understanding something, and those who can hold it accountable having clarity over how and why an autonomous vehicle chose its path. Accident investigators, insurance assessors, road safety authorities, and car maintenance companies would all have a vested interest in understanding how and why driving decisions are made.

Deep learning networks could be made up of millions, billions or even trillions of connections. Therefore, auditing each connection in order to understand every decision would often be unmanageable, convoluted and potentially impossible to interpret. So, if attempting to address concerns over accountability and opacity of AI networks, then it’s important to prioritize what you need to know, what you want to understand and why.

Deep learning can be influenced by its teachers

As we’ve seen, deep learning is in some ways a high volume system of trial and error, testing out what works and what doesn’t, identifying measures of success, and building on the wins. But humans don’t evolve through trial and error alone; there’s also teaching passed down to help shape our actions.

Eat a bunch of wild mushrooms foraged in the forest, and you’ll find out the hard way which ones are poisonous. But luckily we’re able to learn from the errors of those who’ve gone before us, and we also make decisions on imparted as well as acquired knowledge. If you read a book on mushrooms or go out with an experienced forager then they can tell you which ones to avoid, so you don’t have to go through the gut-wrenching trial and error of eating the dangerous varieties.

Likewise, many neural networks allow information to be fed into them to help shape the decision-making process. This human influence should give a level of reassurance that the machines are not making all their decisions based only on black box experiences of which we don’t have a clear view.

To use AI-powered optimization platform Sentient Ascend as an example, it needs input from your CRO team in the shape of hypotheses and testing ideas in order to run successful tests.

In other words, Ascend uses your own building blocks and then uses evolutionary algorithms to identify the most powerful combinations and variations of those building blocks. You're not giving free rein to an opaque AI tool to decide how to optimize your site, but instead harnessing the power and scale of AI in order to test more of your ideas, faster and more efficiently.

Focus on your key results

As we've seen, when it comes to cracking open the black box of AI tools in marketing, it raises the question: how many of your other marketing tools do you truly understand? For performance-based professionals, AI offers another tool for your belt, but the most important thing is whether it delivers the results you need.

You should be measuring and testing your tools and strategies with AI tools, as with any other technology. This gives you visibility of what is working for your business.

By adopting CRO principles of testing, measuring and learning, you should gain confidence that any business decisions you make based on AI are solid and reliable, even if you couldn't stand in front of your CEO and explain the nitty-gritty of how each connected node under the hood works together.

But despite the opaque reputation, many AI-powered platforms do allow users to peek inside the black box. Evolutionary algorithms which make their decisions based on trial and error can also be a little easier to understand for those without expert knowledge in machine learning processes.

Sentient Ascend users, for example, get access to comprehensive reporting, which includes graphs allowing you to home in on the performance of each different design candidate. This allows full visibility to understand the 'thought process' behind the algorithms' decisions to progress or remove certain variations.

Of course, scale can be a sticking point for those who want to deep dive into the inner workings of the software. The advantage of using AI to power your optimization testing is that it can run tests at a greater volume and scale than traditional, manually-built A/B testing tools. Therefore spending time to go back through and investigate every single variation could be very time-consuming. For example, what appears to be a relatively simple layout above the fold could easily have a million different variations to be tested.

The same applies to many other use cases for AI. If you’re using machine learning to analyze different datasets to be able to predict stock price changes, then going back in to check every data point assessed is not going to be a very efficient use of time. But it’s reassuring to know that the option is there to delve into the data should you need to audit performance or get a deeper understanding.

But this volume of data is why it's important to prioritize the KPIs that are most important to you. And if you are measuring against your key business metrics and getting positive results, then the idea of taking a slight leap of faith as to how the black box tools deliver their results becomes much easier to swallow. Carry out due diligence on the tools you use, and you should be willing to accept accountability yourself for the results they deliver.

Making the machines more accountable

It’s the convoluted and complex nature of neural networks that can make them difficult to interrogate and understand. There are so many layers and a tangled web of connections that lead to outputs, that detangling them can seem a near-impossible task.

But many systems are now having some additional degrees of accountability built into them. MIT’s Regina Barzilay has worked on an AI system for mining pathology reports, but added in an additional step whereby the system pulls out and highlights snippets of text that represent a pattern discovered by the network.

Nvidia, which develops chips to power autonomous vehicles, has been working on a way of visually highlighting what the system focuses on to make its driving decisions.

While such steps will help offer reassurances and some clarity as to how deep learning networks arrive at decisions, many AI platforms are still some way off being able to offer a completely transparent view. It seems natural that in a world becoming increasingly reliant on AI, there will need to be an element of trust involved as to how it works, in the same way that there is inherent trust in the humans who are responsible for decision making. Jury members are not quizzed on exactly what swayed their decision, nor are their brain activities scanned and recorded to check everything is functioning as planned. Yet jury decisions are still upheld by law in good faith.

With the evolving complexity of AI, it is almost inevitable that some of its inner workings will appear to be a black box to all but the very few who can comprehend how they work. But that doesn’t mean accountability is out of the question. Use the data you have available, identify the key information you need to know, and make the most of the reporting tools within your AI platforms, then the black box machines will not appear as mysterious as first feared.

Source: Sentient.ai

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Medrobotics Flex Robotic system is changing how medical care is given

Medrobotics Flex Robotic system is changing how medical care is given

This robotic system helps surgeons reach complex anatomical locations; with the Flex Robotic System, Medrobotics is changing how medical care is given.

 

The Flex Robotic System offers a stable surgical platform and excellent instrument triangulation.

If you’re interested in a career in Robotics call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Can AI Write its Own Applications?

Can AI Write its Own Applications?

Early last year, a Microsoft research project dubbed DeepCoder announced that it had made progress creating AI that could write its own programs.

Such a feat has long captured the imagination of technology optimists and pessimists alike, who might consider software that creates its own software as the next paradigm in technology – or perhaps the direct route to building the evil Skynet.

As with most of the machine learning and deep learning approaches that make up the bulk of today's AI, DeepCoder created its code from large numbers of examples of existing code that researchers used to train the system.

The result: software that ended up assembling bits of human-created programs, a feat Wired Magazine referred to as ‘looting other software.’

And yet, in spite of DeepCoder’s PR faux pas, the idea of software smart enough to create its own applications remains an area of active research, as well as an exciting prospect for the digital world at large.

The Notion of ‘Intent-Based Programming’

What do we really want when we say we want software smart enough to write applications for us? The answer: we want to be able to express our intent for the application and let the software take it from there.

The phrase ‘intent-based’ comes from the emerging product category ‘intent-based networking,’ an AI-based approach to configuring networks that divines the business intent of the administrator.

An intent-based networking system (IBNS) enables admins to define a high-level business policy. The IBNS then verifies that it can execute the policy, manipulates network resources to create the desired state, and monitors the state of the network to ensure that it is enforcing all policies on an ongoing basis, taking corrective action when necessary.
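
A conceptual sketch of that loop in Python, with every network-facing call stubbed out; this is not any vendor's actual API, just the shape of the verify / apply / monitor cycle described above:

```python
# Intent-based loop: declare intent, verify, apply, then monitor and correct.
import time

policy = {"app": "voice", "max_latency_ms": 50}     # high-level business intent

def can_satisfy(policy):      # verification step (stubbed)
    return True

def apply_policy(policy):     # manipulate network resources (stubbed)
    print(f"provisioning paths for {policy['app']} under {policy['max_latency_ms']} ms")

def measure_latency_ms():     # telemetry (stubbed)
    return 42

if can_satisfy(policy):
    apply_policy(policy)
    for _ in range(3):                          # ongoing assurance loop
        if measure_latency_ms() > policy["max_latency_ms"]:
            apply_policy(policy)                # corrective action
        time.sleep(0.1)
```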

Intent-based programming, by extension, takes the concept of intent-based networking and extends it to any type of application a user might desire.

For example, you could ask Alexa to build you an application that, say, kept track of your album collection. It would code it for you automatically and present the finished, working application to you, ready for use.

What Might Be Going on Under the Covers

In the simple Alexa example above, the obvious approach for the AI to take would be to find an application similar to the one the user requested, and then make tweaks to it as necessary, or perhaps assemble the application out of pre-built components.

In other words, Alexa would be following a similar technique as DeepCoder, borrowing code from other places and using those bits and pieces as templates to meet a current need.

But assembling templates or other human-written code isn’t what we really mean by AI-written software, is it? What we’re really looking for is the ability to create applications that are truly novel, and thus most of their inner workings don’t already exist in some other form.

In other words, can AI be creative when it creates software? Can it create truly novel application behavior, behavior that no human has coded before?

5GLs to the Rescue

Using software that can take the intent of the user and generate the desired application has been a wish-list item for computer science researchers for decades. In fact, the Fifth Generation Language (5GL) movement from the 1980s sought to “make the computer solve a given problem without the programmer,” according to Wikipedia.

The idea with 5GLs was for users to express their intent in terms of constraints, which the software would then translate into working applications. This idea appeared promising but turned out to have limited applicability.

The sorts of problems that specifying constraints alone could solve turned out to be a rather small set: mostly mathematical optimization tasks that would seek a mathematical solution to a set of mathematical expressions that represented the constraints.

The challenge facing the greater goal of creating arbitrary applications was that 5GLs weren’t able to express algorithms – the sequence of steps programmers specify when they write code by hand.

As a result, 5GLs didn’t really go anywhere, although they did lead to an explosion of declarative, domain-specific languages like SQL and HTML – languages that separate the representation of the intent of users from the underlying software.

But make no mistake: expressing your intent in a declarative language is very different from software that can create its own applications. Writing SELECT * FROM ALBUMLIST is a far cry from ‘Alexa, build me an app that keeps track of my albums.’

The missing piece to the 5GL puzzle, of course, is AI.

A Question of Algorithms

In the 1980s we had no way for software to create its own algorithms – but with today’s AI, perhaps we do. The simple optimization tasks that 5GLs could handle have grown into full-fledged automated optimization for computer algebra systems, which would qualify as computer-generated algorithms. However, these are still not general purpose.

There are also research projects like Google AutoML, which creates machine learning-generated neural network architectures. You can think of a neural network architecture as a type of application, albeit one that uses AI. So in this case, we have AI that is smart enough to create AI-based applications.

AutoML and similar projects are quite promising to be sure. However, not only have we not moved much closer to Skynet, but such efforts also fall well short of the intent-based programming goal I described earlier.

The Context for Human Intent

Fundamentally, AutoML and intent-based programming are going in different directions, because they have different contexts for how users would express their intent. The Alexa example above is unequivocally human-centric, as it leverages Alexa’s natural language processing and other contextual skills to provide a consumer-oriented user experience.

In the case of AutoML (or any machine learning or deep learning effort, for that matter), engineers must express success conditions (i.e., their intent) in a formal way.

If you want to teach AI to recognize cat photos, for example, this formal success condition is trivial: of a data set containing a million images, these 100,000 have cats in them. Either the software gets it right or it doesn’t, and it learns from every attempt.

What, then, is the formal success condition for ‘the album tracking application I was looking for’? Answering such a question in the general case is still beyond our abilities.

Today’s State of the Art

Today’s AI cannot create an algorithm that satisfies a human’s intent in all but the simplest cases. What we do have is AI that can divine insights from patterns in large data sets.

If we can boil down algorithms into such data sets, then we can make some headway. For example, if an AI-based application has access to a vast number of human-created workflows, then it can make a pretty good guess as to the next step in a workflow you might be working on at the moment.

In other words, we now have autocomplete for algorithms – what we call ‘next best action.’ We may still have to give our software some idea of how we want an application to behave, but AI can assist us in figuring out the steps that make it work.
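
A toy version of such an "autocomplete": count step transitions in past workflows and suggest the likeliest next step. The workflow names are invented, and a real system would use far richer context than a first-order transition count:

```python
# Next-best-action as a simple first-order transition model over past workflows.
from collections import Counter, defaultdict

past_workflows = [
    ["open_ticket", "check_account", "reset_password", "close_ticket"],
    ["open_ticket", "check_account", "escalate"],
    ["open_ticket", "check_account", "reset_password", "close_ticket"],
]

transitions = defaultdict(Counter)
for wf in past_workflows:
    for current, nxt in zip(wf, wf[1:]):
        transitions[current][nxt] += 1          # count each observed step pair

def next_best_action(step):
    options = transitions[step]
    return options.most_common(1)[0][0] if options else None

print(next_best_action("check_account"))   # -> 'reset_password' (seen twice vs once)
```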

The Intellyx Take

AI that can provide suggestions for the next best action but cannot build an entire algorithm from scratch qualifies more as Augmented Intelligence than Artificial Intelligence.

When we are looking for software that can satisfy human intent, as opposed to automatically solving a problem on its own, we’re actually looking for this sort of collaboration. After all, we still want a hand in building the application – we just want the process to be dead simple.

It’s no surprise, therefore, that the burgeoning low-code/no-code platform market is rapidly innovating in this direction.

Today’s low-code/no-code platforms support sophisticated, domain-specific declarative languages that give people the ability to express their intent in English-like expressions (or other human languages of choice).

They also have the ability to represent apps and app components as templates, affording users the ability to assemble pieces of applications with ‘drag and drop’ simplicity.

And now, many low-code/no-code platform vendors are adding AI to the mix, augmenting the abilities of application creators to specify the algorithms they intend their applications to follow.

Someday, perhaps, we’ll simply pick up our mic and tell such platforms what we want and they’ll build it automatically. We’re not quite there yet, but we’re closer than we’ve ever been with today’s low-code/no-code platforms – and innovation is proceeding at a blistering pace. It won’t be long now.

Source: IoT.sys

If you’re interested in a career in Artificial Intelligence call us on +44 0208 290 4656

or drop us an email info@hansonregan.com

Essentials of Deep Learning: Introduction to Long Short Term Memory

Essentials of Deep Learning: Introduction to Long Short Term Memory

Introduction

Sequence prediction problems have been around for a long time. They are considered as one of the hardest problems to solve in the data science industry. These include a wide range of problems; from predicting sales to finding patterns in stock markets’ data, from understanding movie plots to recognizing your way of speech, from language translations to predicting your next word on your iPhone’s keyboard.

With the recent breakthroughs in data science, it has been found that for almost all of these sequence prediction problems, Long Short Term Memory networks (LSTMs) are the most effective solution.

LSTMs have an edge over conventional feed-forward neural networks and RNNs in many ways. This is because of their property of selectively remembering patterns for long durations of time. The purpose of this article is to explain LSTMs and enable you to use them in real-life problems. Let's have a look!
Note: To go through the article, you must have basic knowledge of neural networks and how Keras (a deep learning library) works. You can refer to the following articles to understand these concepts:
•    Understanding Neural Network From Scratch
•    Fundamentals of Deep Learning – Introduction to Recurrent Neural Networks
•    Tutorial: Optimizing Neural Networks using Keras (with Image recognition case study)
 
Table of Contents

1. Flashback: A look into Recurrent Neural Networks (RNN)
2. Limitations of RNNs
3. Improvement over RNN: Long Short Term Memory (LSTM)
4. Architecture of LSTM
   1. Forget Gate
   2. Input Gate
   3. Output Gate
5. Text generation using LSTMs
 
1. Flashback: A look into Recurrent Neural Networks (RNN)

Take an example of sequential data, such as the stock market's data for a particular stock. A simple machine learning model, or an Artificial Neural Network, may learn to predict stock prices based on a number of features: the volume of the stock, the opening value, etc. While the price of the stock depends on these features, it is also largely dependent on the stock values of the previous days. In fact, for a trader, the values of the previous days (or the trend) are one major deciding factor for predictions.

In conventional feed-forward neural networks, all test cases are considered independent: when fitting the model for a particular day, there is no consideration of the stock prices on the previous days.

This dependency on time is achieved via Recurrent Neural Networks. A typical RNN is drawn as a single cell whose output loops back into itself, which may be intimidating at first sight; but once unfolded across time steps, it is simply the same cell applied repeatedly, passing its hidden state forward.
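
To ground the picture, here is a single vanilla RNN step in NumPy: the same weight matrices are reused at every time step, and the hidden state carries information forward. Dimensions are chosen arbitrarily for the sketch.

```python
# A minimal vanilla RNN step: h_t = tanh(W_xh x_t + W_hh h_{t-1} + b).
import numpy as np

rng = np.random.default_rng(0)
W_xh = rng.normal(size=(8, 4))   # input -> hidden
W_hh = rng.normal(size=(8, 8))   # hidden -> hidden (the recurrence)
b_h  = np.zeros(8)

def rnn_step(x_t, h_prev):
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

h = np.zeros(8)
for x_t in rng.normal(size=(5, 4)):      # a sequence of 5 inputs
    h = rnn_step(x_t, h)                 # h depends on everything seen so far
print(h.shape)                           # (8,)
```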

2. Limitations of RNNs

Recurrent Neural Networks work just fine when we are dealing with short-term dependencies, as in predicting the final word of a sentence like "the color of the sky is ___".

RNNs turn out to be quite effective here, because the problem has nothing to do with the wider context of the statement. The RNN need not remember what was said before this, or what its meaning was; all it needs to know is that in most cases the sky is blue. Thus the prediction would be "blue".

However, vanilla RNNs fail to understand the context behind an input. Something that was said long before cannot be recalled when making predictions in the present. Consider a passage along the lines of: "I have been working in Spain for the last 20 years. … I can speak fluent ___."

Here, we can understand that since the author has worked in Spain for 20 years, it is very likely that he possesses a good command of Spanish. But to make a proper prediction, the RNN needs to remember this context: the relevant information may be separated from the point where it is needed by a huge load of irrelevant data. This is where a Recurrent Neural Network fails!

The reason behind this is the problem of the vanishing gradient. To understand it, you'll need some knowledge of how a feed-forward neural network learns. We know that for a conventional feed-forward neural network, the weight update applied at a particular layer is a multiple of the learning rate, the error term from the previous layer, and the input to that layer. Thus, the error term for a particular layer is essentially a product of all the previous layers' errors. When dealing with activation functions like the sigmoid function, the small values of its derivatives (occurring in the error function) get multiplied many times as we move towards the starting layers. As a result, the gradient almost vanishes as we move towards the starting layers, and those layers become difficult to train.
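
The arithmetic of the problem fits in a few lines: the sigmoid's derivative never exceeds 0.25, so a product of many such terms collapses towards zero.

```python
# The vanishing gradient in one line of arithmetic.
import numpy as np

def sigmoid_grad(x):
    s = 1 / (1 + np.exp(-x))
    return s * (1 - s)          # maximum value 0.25, at x = 0

grad = 1.0
for layer in range(30):         # 30 layers (or 30 time steps in an unrolled RNN)
    grad *= sigmoid_grad(0.0)   # best case: 0.25 each time
print(grad)                     # ~8.7e-19 — effectively no signal reaches early layers
```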

A similar case is observed in Recurrent Neural Networks. RNNs remember things for just small durations of time: if we need the information after a small time, it may be reproducible, but once a lot of words are fed in, this information gets lost somewhere. This issue can be resolved by applying a slightly tweaked version of RNNs: Long Short-Term Memory networks.


3. Improvement over RNN: LSTM (Long Short-Term Memory) Networks

When we arrange our calendar for the day, we prioritize our appointments, right? If we need to make space for something important, we know which meeting could be cancelled to accommodate it.

Turns out that an RNN doesn't do this. In order to add new information, it transforms the existing information completely by applying a function. Because of this, the entire information is modified as a whole; there is no distinction between 'important' information and 'not so important' information.

LSTMs, on the other hand, make small modifications to the information via multiplications and additions. In LSTMs, information flows through a mechanism known as the cell state. This way, LSTMs can selectively remember or forget things. The information at a particular cell state has three different dependencies.

We’ll visualize this with an example. Let’s take the example of predicting stock prices for a particular stock. The stock price of today will depend upon:

  1. The trend that the stock has been following in the previous days, maybe a downtrend or an uptrend.
  2. The price of the stock on the previous day, because many traders compare the stock’s previous day price before buying it.
  3. The factors that can affect the price of the stock for today. This can be a new company policy that is being criticized widely, or a drop in the company’s profit, or maybe an unexpected change in the senior leadership of the company.

These dependencies can be generalized to any problem as:

  1. The previous cell state (i.e. the information that was present in the memory after the previous time step)
  2. The previous hidden state (i.e. this is the same as the output of the previous cell)
  3. The input at the current time step (i.e. the new information that is being fed in at that moment)

Another important feature of LSTM is its analogy with conveyor belts!

That’s right!

Industries use them to move products around for different processes. LSTMs use this mechanism to move information around.

We may have some addition, modification or removal of information as it flows through the different layers, just like a product may be molded, painted or packed while it is on a conveyor belt.

Picture the cell state as that conveyor belt: information rides along it from cell to cell.
 

Although this analogy is not even close to the actual architecture of an LSTM, it serves our purpose for now.

It is precisely this property of LSTMs, where they do not manipulate the entire information but rather modify it slightly, that allows them to forget and remember things selectively. How they do so is what we are going to learn in the next section.

 

4. Architecture of LSTMs

The functioning of an LSTM can be visualized through the workings of a news channel's team covering a murder story. A news story is built around facts, evidence and the statements of many people. Whenever a new event occurs, you take one of three steps.

Let's say we were assuming that the murder was done by 'poisoning' the victim, but the autopsy report that just came in says that the cause of death was 'an impact on the head'. Being a part of this news team, what do you do? You immediately forget the previous cause of death and all the stories that were woven around it.

What if an entirely new suspect is introduced into the picture: a person who had a grudge against the victim and could be the murderer? You input this information into your news feed, right?

Now, all these broken pieces of information cannot be served on mainstream media. So, after a certain time interval, you need to summarize the information and output the relevant parts to your audience, perhaps in the form of "XYZ turns out to be the prime suspect".

Now let's get into the details of the architecture of an LSTM network.

Now, this is nowhere close to the simplified version we saw before, but let me walk you through it. A typical LSTM network is composed of memory blocks called cells. Two states are passed on to the next cell: the cell state and the hidden state. The memory blocks are responsible for remembering things, and manipulations of this memory are done through three major mechanisms, called gates. Each of them is discussed below.

4.1 Forget Gate

Take a text prediction problem, and assume an LSTM is fed a sentence along the lines of: "Bob is a nice person. Dan, on the other hand, ...".

As soon as the first full stop after "person" is encountered, the forget gate realizes that there may be a change of context in the next sentence. As a result, the subject of the first sentence is forgotten and the place for the subject is vacated. When we start speaking about "Dan", this vacated position is allocated to "Dan". This forgetting of the previous subject is brought about by the forget gate.

A forget gate is responsible for removing information from the cell state. Information that is no longer required for the LSTM to understand things, or that is of less importance, is removed by multiplying the cell state with a filter. This is required for optimizing the performance of the LSTM network.

This gate takes in two inputs: h_t-1 and x_t.

h_t-1 is the hidden state of the previous cell (i.e. the output of the previous cell), and x_t is the input at the current time step. The given inputs are multiplied by weight matrices and a bias is added. The sigmoid function is then applied to this value. The sigmoid outputs a vector with values ranging from 0 to 1, one value for each number in the cell state. Essentially, the sigmoid decides which values to keep and which to discard: a '0' for a particular value means the forget gate wants the cell state to forget that piece of information completely, while a '1' means it should be retained in full. This vector is then multiplied element-wise with the cell state.
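Written compactly in the standard LSTM notation (this is the conventional formulation rather than one spelled out in the article; W_f and b_f are the gate's weight matrix and bias):

f_t = sigmoid(W_f · [h_t-1, x_t] + b_f)

The cell state is then scaled element-wise by f_t, so values near 0 erase information and values near 1 retain it.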

 

4.2 Input Gate

Okay, let's take another example, where the LSTM is analyzing a sentence along the lines of: "Bob knows swimming. He told me over the phone that he had served in the Navy for four years.".

The important information here is that "Bob" knows swimming and that he served in the Navy for four years. This can be added to the cell state. However, the fact that he said all this over the phone is less important and can be ignored. This process of adding new information is done via the input gate.

Here is its structure:

 

The input gate is responsible for adding information to the cell state. This addition is basically a three-step process, as seen from the diagram above.

  1. Regulating what values need to be added to the cell state by means of a sigmoid function. This is very similar to the forget gate and acts as a filter for all the information from h_t-1 and x_t.
  2. Creating a vector containing all the possible values that can be added to the cell state (as perceived from h_t-1 and x_t). This is done using the tanh function, which outputs values from -1 to +1.
  3. Multiplying the value of the regulatory filter (the sigmoid gate) with the created vector (the tanh output) and then adding this useful information to the cell state via the addition operation.

 

Once this three-step process is complete, we have ensured that only information that is important, and not redundant, is added to the cell state.
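For reference, the three steps in the standard notation (again the conventional formulation, with W_i, b_i, W_C and b_C as the gate's parameters) are:

i_t = sigmoid(W_i · [h_t-1, x_t] + b_i)    (the regulatory filter)
C~_t = tanh(W_C · [h_t-1, x_t] + b_C)      (the candidate vector)
C_t = f_t * C_t-1 + i_t * C~_t             (the updated cell state)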

 

4.3 Output Gate

Not all information that runs along the cell state is fit to be output at a given time. We'll visualize this with an example: consider a phrase that ends "... brave ___", where the network has to fill in the blank.

There could be a number of options for the empty space. But we know that the current input, 'brave', is an adjective used to describe a noun. Thus, whatever word follows has a strong tendency to be a noun. And thus, Bob could be an apt output.

This job of selecting useful information from the current cell state and presenting it as the output is done via the output gate. Here is its structure:

 

The functioning of the output gate can again be broken down into three steps:

  1. Creating a vector by applying the tanh function to the cell state, thereby scaling the values to the range -1 to +1.
  2. Making a filter using the values of h_t-1 and x_t, such that it can regulate the values that need to be output from the vector created above. This filter again employs a sigmoid function.
  3. Multiplying the value of this regulatory filter with the vector created in step 1, and sending it out as the output as well as to the hidden state of the next cell.

The filter in the above example will make sure that it diminishes all values other than 'Bob'. Thus, the filter needs to be built from the input and hidden state values and applied to the cell state vector.
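Putting the three gates together, here is a minimal NumPy sketch of a single LSTM cell step. The function name lstm_step, the weight matrices W_f, W_i, W_c, W_o and their biases are illustrative names, not taken from the article or from Keras.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W_f, b_f, W_i, b_i, W_c, b_c, W_o, b_o):
    z = np.concatenate([h_prev, x_t])   # combine previous hidden state and current input
    f_t = sigmoid(W_f @ z + b_f)        # forget gate: what to erase from the cell state
    i_t = sigmoid(W_i @ z + b_i)        # input gate: which new values to let in
    c_hat = np.tanh(W_c @ z + b_c)      # candidate values, scaled to [-1, +1]
    c_t = f_t * c_prev + i_t * c_hat    # updated cell state
    o_t = sigmoid(W_o @ z + b_o)        # output gate: what to reveal from the cell state
    h_t = o_t * np.tanh(c_t)            # new hidden state (the cell's output)
    return h_t, c_t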

 

5. Text generation using LSTMs

We have had enough of the theory and inner workings of LSTMs. Now let's try to build a model that can predict some number of characters following an original excerpt of Macbeth. Most classical texts are no longer protected under copyright and can be found here. An updated version of the .txt file can be found here.

We will use the library Keras, which is a high-level API for neural networks and works on top of TensorFlow or Theano. So make sure that before diving into this code you have Keras installed and functional.

Okay, so let’s generate some text!

 

  • Importing dependencies

# Importing dependencies numpy and keras
import numpy
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import LSTM
from keras.utils import np_utils

We import all the required dependencies and this is pretty much self-explanatory.

  • Loading text file and creating character to integer mappings

# load text
filename = "/macbeth.txt"
text = (open(filename).read()).lower()

# mapping characters with integers
unique_chars = sorted(list(set(text)))

char_to_int = {}
int_to_char = {}

for i, c in enumerate(unique_chars):
    char_to_int.update({c: i})
    int_to_char.update({i: c})

The text file is opened and all characters are converted to lowercase. To facilitate the following steps, we map each character to a number; this makes the computation part of the LSTM easier.

  • Preparing dataset

# preparing input and output dataset
X = []
Y = []

for i in range(0, len(text) - 50, 1):
    sequence = text[i:i + 50]
    label = text[i + 50]
    X.append([char_to_int[char] for char in sequence])
    Y.append(char_to_int[label])

Data is prepared in a format such that, if we want the LSTM to predict the 'O' in 'HELLO', we would feed in ['H', 'E', 'L', 'L'] as the input and ['O'] as the expected output. Similarly, here we fix the length of the sequence we want (set to 50 in the example) and then save the encodings of each 50-character window in X and the expected output, i.e. the 51st character, in Y.

  • Reshaping of X

# reshaping, normalizing and one hot encoding
X_modified = numpy.reshape(X, (len(X), 50, 1))
X_modified = X_modified / float(len(unique_chars))
Y_modified = np_utils.to_categorical(Y)

 

An LSTM network expects the input to be in the form [samples, time steps, features], where samples is the number of data points we have, time steps is the number of time-dependent steps in a single data point, and features is the number of variables per time step (here just one, the character encoding). We then scale the values in X_modified to between 0 and 1 and one-hot encode our true values in Y_modified.

 

  • Defining the LSTM model

# defining the LSTM model
model = Sequential()
model.add(LSTM(300, input_shape=(X_modified.shape[1], X_modified.shape[2]), return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(300))
model.add(Dropout(0.2))
model.add(Dense(Y_modified.shape[1], activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='adam')

A Sequential model, which is a linear stack of layers, is used. The first layer is an LSTM layer with 300 memory units, and it returns sequences. This ensures that the next LSTM layer receives sequences and not just the final output. A dropout layer is applied after each LSTM layer to avoid overfitting. Finally, the last layer is a fully connected layer with a 'softmax' activation and as many neurons as there are unique characters, because we need to output a one-hot encoded result.

  • Fitting the model and generating characters

# fitting the model
model.fit(X_modified, Y_modified, epochs=1, batch_size=30)

# picking a random seed sequence to start generating from
start_index = numpy.random.randint(0, len(X) - 1)
new_string = list(X[start_index])  # copy, so appending below does not mutate X

# generating characters
for i in range(50):
    x = numpy.reshape(new_string, (1, len(new_string), 1))
    x = x / float(len(unique_chars))

    # predicting the next character
    pred_index = numpy.argmax(model.predict(x, verbose=0))
    char_out = int_to_char[pred_index]
    print(char_out, end='')

    # slide the window: append the prediction, drop the oldest character
    new_string.append(pred_index)
    new_string = new_string[1:len(new_string)]

The model is fit here for a single epoch, with a batch size of 30 (more epochs give better results). We then pick a random seed sequence from the input data and start generating characters. The prediction from the model gives the character encoding of the predicted character; it is decoded back to the character value and appended to the pattern.

The network's output is a stream of generated characters; eventually, after enough training epochs, it gives better and better results over time. This is how you would use an LSTM to solve a sequence prediction task.

 

End Notes

LSTMs are a very promising solution to sequence and time-series related problems. However, the one disadvantage that I find with them is the difficulty of training them: a lot of time and system resources go into training even a simple model. But that is just a hardware constraint! I hope I was successful in giving you a basic understanding of these networks.

Source: Analyticsvidhya

3.  A Gentle Introduction to Exploding Gradients in Neural Networks

Exploding gradients are a problem where large error gradients accumulate and result in very large updates to neural network model weights during training.

This has the effect of your model being unstable and unable to learn from your training data.

In this post, you will discover the problem of exploding gradients with deep artificial neural networks.

After completing this post, you will know:

  • What exploding gradients are and the problems they cause during training.
  • How to know whether you may have exploding gradients with your network model.
  • How you can fix the exploding gradient problem with your network.

Let’s get started.

What Are Exploding Gradients?

An error gradient is the direction and magnitude calculated during the training of a neural network that is used to update the network weights in the right direction and by the right amount.

In deep networks or recurrent neural networks, error gradients can accumulate during an update and result in very large gradients. These in turn result in large updates to the network weights, and in turn, an unstable network. At an extreme, the values of weights can become so large as to overflow and result in NaN values.

The explosion occurs through exponential growth: gradients are repeatedly multiplied through the network layers, and when those values are larger than 1.0, the product blows up.
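A simple illustration of that growth (not from the original post), mirroring the vanishing-gradient case:

# repeated factors larger than 1.0 grow exponentially with the number of layers
factor = 1.5
for n_layers in (5, 10, 20):
    print(n_layers, factor ** n_layers)
# prints roughly 7.6, 57.7 and 3325.3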

What Is the Problem with Exploding Gradients?

In deep multilayer Perceptron networks, exploding gradients can result in an unstable network that at best cannot learn from the training data and at worst results in NaN weight values that can no longer be updated.

… exploding gradients can make learning unstable.

— Page 282, Deep Learning, 2016.

In recurrent neural networks, exploding gradients can result in an unstable network that is unable to learn from training data and at best a network that cannot learn over long input sequences of data.

… the exploding gradients problem refers to the large increase in the norm of the gradient during training. Such events are due to the explosion of the long term components

— On the difficulty of training recurrent neural networks, 2013.

How do You Know if You Have Exploding Gradients?

There are some subtle signs that you may be suffering from exploding gradients during the training of your network, such as:

  • The model is unable to get traction on your training data (e.g. poor loss).
  • The model is unstable, resulting in large changes in loss from update to update.
  • The model loss goes to NaN during training.

If you have these types of problems, you can dig deeper to see if you have a problem with exploding gradients.

There are some less subtle signs that you can use to confirm that you have exploding gradients.

  • The model weights quickly become very large during training.
  • The model weights go to NaN values during training.
  • The error gradient values are consistently above 1.0 for each node and layer during training.

How to Fix Exploding Gradients?

There are many approaches to addressing exploding gradients; this section lists some best practice approaches that you can use.

1. Re-Design the Network Model

In deep neural networks, exploding gradients may be addressed by redesigning the network to have fewer layers.

There may also be some benefit in using a smaller batch size while training the network.

In recurrent neural networks, updating across fewer prior time steps during training, called truncated Backpropagation through time, may reduce the exploding gradient problem.

2. Use Rectified Linear Activation

In deep multilayer Perceptron neural networks, gradient exploding can occur given the choice of activation function, such as the historically popular sigmoid and tanh functions.

Exploding gradients can be reduced by using the rectified linear (ReLU) activation function.

Adopting the ReLU activation function is a new best practice for hidden layers.

3. Use Long Short-Term Memory Networks

In recurrent neural networks, gradient exploding can occur given the inherent instability in the training of this type of network, e.g. via Backpropagation through time that essentially transforms the recurrent network into a deep multilayer Perceptron neural network.

Exploding gradients can be reduced by using the Long Short-Term Memory (LSTM) memory units and perhaps related gated-type neuron structures.

Adopting LSTM memory units is a new best practice for recurrent neural networks for sequence prediction.

4. Use Gradient Clipping

Exploding gradients can still occur in very deep Multilayer Perceptron networks with a large batch size and LSTMs with very long input sequence lengths.

If exploding gradients are still occurring, you can check for and limit the size of gradients during the training of your network.

This is called gradient clipping.

Dealing with the exploding gradients has a simple but very effective solution: clipping gradients if their norm exceeds a given threshold.

— Section 5.2.4, Vanishing and Exploding Gradients, Neural Network Methods in Natural Language Processing, 2017.

Specifically, the values of the error gradient are checked against a threshold value and clipped or set to that threshold value if the error gradient exceeds the threshold.

To some extent, the exploding gradient problem can be mitigated by gradient clipping (thresholding the values of the gradients before performing a gradient descent step).

— Page 294, Deep Learning, 2016.

In the Keras deep learning library, you can use gradient clipping by setting the clipnorm or clipvalue arguments on your optimizer before training.

Good default values are clipnorm=1.0 and clipvalue=0.5.
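As a minimal sketch of what that looks like (the optimizer choice and learning rate are illustrative, and a compiled-ready `model` is assumed to exist):

from keras.optimizers import SGD

# rescale the whole gradient vector when its L2 norm exceeds 1.0
opt = SGD(lr=0.01, clipnorm=1.0)
# alternatively, clip each gradient element to the range [-0.5, 0.5]
# opt = SGD(lr=0.01, clipvalue=0.5)
model.compile(loss='mean_squared_error', optimizer=opt)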

5. Use Weight Regularization

Another approach, if exploding gradients are still occurring, is to check the size of network weights and apply a penalty to the network's loss function for large weight values.

This is called weight regularization and often an L1 (absolute weights) or an L2 (squared weights) penalty can be used.

Using an L1 or L2 penalty on the recurrent weights can help with exploding gradients

— On the difficulty of training recurrent neural networks, 2013.

In the Keras deep learning library, you can use weight regularization by setting the kernel_regularizer argument on your layer and using an L1 or L2 regularizer.
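A minimal sketch (the layer sizes and input shape are illustrative; `l2(0.01)` sets the penalty coefficient, and `recurrent_regularizer` applies the penalty to the recurrent weights mentioned in the quote above):

from keras.models import Sequential
from keras.layers import LSTM, Dense
from keras.regularizers import l2

model = Sequential()
# penalize large input weights (kernel) and recurrent weights
model.add(LSTM(32, input_shape=(50, 1),
               kernel_regularizer=l2(0.01),
               recurrent_regularizer=l2(0.01)))
model.add(Dense(1))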


Summary

In this post, you discovered the problem of exploding gradients when training deep neural network models.

Specifically, you learned:

  • What exploding gradients are and the problems they cause during training.
  • How to know whether you may have exploding gradients with your network model.
  • How you can fix the exploding gradient problem with your network.

Source: Machinelearningmastery

 

If you’re interested in a career in Deep Learning call us at Hanson Regan on  +44 0208 290 4656

 

Beijing dominates China’s artificial intelligence landscape

Zhong Guan Cun, the city's vast technology hub, has become the AI innovation highland of the country.

There are 1,070 artificial intelligence companies in Beijing, accounting for 26% of the total number in China, according to the AI Development White Paper published by the Beijing Municipal Commission of Economy and Information Technology, as reported by The Paper.

As of May 8, the number of AI enterprises in China hit 4,040, while those with venture capital reached 1,237 — 35% of them based in Beijing.

Zhong Guan Cun, the technology hub in Beijing, has become the AI innovation highland of the country. But more than half of Beijing’s AI firms are still in the initial stage.

At least 29% of them are in the A round, followed by 6.7% in the pre-A round. About 18.5% and 2.7% have received funding from angel investors and seed investors, respectively.

Though Beijing is equipped with academic resources and a strong talent pool, the development of the AI industry still faces problems, such as a lack of original innovation capacity compared with US counterparts.

Also, the lack of high-end chips, key components and high-precision sensors may pose a great challenge to the sector's development in the future, the report says.

Source: ATimes

If you’re interested in a career in Artificial Intelligence call us at Hanson Regan on  +44 0208 290 4656

Combating hunger with artificial intelligence

In order to improve world food conditions, a team around computer science professor Kristian Kersting was inspired by the technology behind Google News.

Almost 800 million people worldwide suffer from malnutrition. In the future there could be around 9.7 billion people—around 2.2 billion more than today. Global demand for food will increase as climate change leaves more and more soil infertile. How should future generations feed themselves?

Kristian Kersting, Professor of Machine Learning at the Technische Universität Darmstadt, and his team see a potential solution in the application of artificial intelligence (AI). Machine learning, a special method of AI, could be the basis for so-called precision farming, which could be used to achieve higher yields on areas of equal or smaller size. The project is funded by the Federal Ministry of Food and Agriculture. Partners are the Institute of Crop Science and Resource Conservation (INRES) at the University of Bonn and the Aachen-based company Lemnatec.

"First of all, we want to understand what physiological processes in plants look like when they suffer from stress," said Kersting. "Stress occurs, for example, when plants do not absorb enough water or are infected with pathogens. Machine learning can help us to analyse these processes more precisely." This knowledge could be used to cultivate more resistant plants and to combat diseases more efficiently.

The researchers installed a hyperspectral camera that records a broad spectrum of wavelengths and provides deep insights into the plants. The more data available on the physiological processes of a plant during its growth cycle, the better the software is able to identify recurring patterns that are responsible for stress. However, too much data can be a problem, as the calculations become too complex. The researchers therefore need algorithms that use only part of the data for learning without sacrificing accuracy.

Kersting's team found a clever solution: To evaluate the data, the team used a highly advanced learning process from language processing, which is used, for example, at Google News. There, an AI selects the relevant articles for the reader from tens of thousands of new articles every day and sorts them by topic. This is done using probability models in which all words of a text are assigned to a specific topic. Kersting's trick was to treat the hyperspectral images of the camera like words: The software assigns certain image patterns to a topic such as the stress state of the plant.

The researchers are currently working on teaching the software to optimise itself using deep learning and to find the patterns that represent stress more quickly. "A healthy spot can for instance be identified from the chlorophyll content in the growth process of the plant," said Kersting. "When a drying process occurs, the measured spectrum changes significantly." The advantage of machine learning is that it can recognise such signs earlier than a human expert, as the software learns to pay attention to more subtleties.

It is hoped that someday, cameras can be installed along rows of plants on an assembly line in the greenhouse, allowing the software to point out abnormalities at any time. Through a constant exchange with plant experts, the system should also learn to identify even unknown pathogens. "Ultimately, our goal is a meaningful partnership between human and artificial intelligence, in order to address the growing problem of world nutrition," says Kersting.

Source: phys.org

If you’re interested in a career in Artificial Intelligence call us at Hanson Regan on  +44 0208 290 4656

Artificial intelligence footstep recognition system could be used for airport security

The way you walk and your footsteps could be used as a biometric at airport security instead of fingerprinting and eye-scanning.

Researchers at The University of Manchester, in collaboration with the Universidad Autónoma de Madrid, Spain, have developed a state-of-the-art artificial intelligence (AI) biometric verification system that can measure a human’s individual gait or walking pattern. It can successfully verify an individual simply by their walking on a pressure pad in the floor, analysing the 3D and time-based data of the footsteps.

The results, published earlier this year in one of the top machine learning research journals, the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), showed that, on average, the AI system correctly identified an individual almost 100% of the time, with just a 0.7% error rate.

Physical biometrics, such as fingerprints, facial recognition and retinal scans, are currently more commonly used for security purposes. However, so-called behavioural biometrics, such as gait recognition, also capture unique signatures delivered by a person’s natural behavioural and movement patterns. The team tested their data by using a large number of so-called ‘impostors’ and a small number of users in three different real-world security scenarios. These were airport security checkpoints, the workplace, and the home environment. 

Omar Costilla Reyes, from Manchester’s School of Electrical and Electronic Engineering, explains: “Each human has approximately 24 different factors and movements when walking, resulting in every individual person having a unique, singular walking pattern. Therefore monitoring these movements can be used, like a fingerprint or retinal scan, to recognise and clearly identify or verify an individual.”

To create the AI system needed to learn such movement patterns, the team used SfootBD, the largest footstep database to date, containing nearly 20,000 footstep signals from 127 different individuals.

Omar added: “Focussing on non-intrusive gait recognition by monitoring the force exerted on the floor during a footstep is very challenging. That’s because distinguishing between the subtle variations from person to person is extremely difficult to define manually, that is why we had to come up with a novel AI system to solve this challenge from a new perspective.”

One of the key benefits of using footstep recognition is that, unlike being filmed or scanned at an airport, the process is non-intrusive for the individual and resilient to noisy environmental conditions. The person doesn’t even need to remove their footwear when walking on the pressure pads, as the system isn’t based on the shape of the footprint itself but on the gait.

Other applications for the technology include smart steps that could recognise neurodegeneration, which could have positive implications in the healthcare sector. This is another area in which Omar intends to advance his research on footstep recognition.

He added: “The research is also being developed to address the healthcare problem of markers for cognitive decline and onset of mental illness, by using raw footstep data from a wide-area floor sensor deployable in smart dwellings. Human movement can be a novel biomarker of cognitive decline, which can be explored like never before with novel AI systems”

The research was also selected for the University’s Faculty of Science and Engineering (FSE) "in-abstract", a compendium of the very best new research coming from FSE.

Source: Manchester.ac.uk

If you’re interested in a career in Artificial Intelligence call us at Hanson Regan on  +44 0208 290 4656

Why Quantum Computing Should Be on Your Radar Now

Boston Consulting Group and Forrester are advising clients to get smart about quantum computing and start experimenting now so they can separate hype from reality.

There's a lot of chatter about quantum computing, some of it false and some of it true. For example, there's a misconception that quantum computers are going to replace classical computers for every possible use case, which is false. "Quantum computing" is not necessarily synonymous with "quantum leap". Rather, quantum computing involves quantum physics, which makes it fundamentally different from classical, binary computing. Binary computers can only process 1s and 0s; quantum computers can process many more possibilities simultaneously.

If math and physics scare you, a simple analogy (albeit not an entirely correct one) involves a light switch and a dimmer switch, representing a classical computer and a quantum computer, respectively. The standard light switch has two states: on and off. The dimmer switch provides many more options, including on, off, and a range of states between on and off that are experienced as degrees of brightness and darkness. With a dimmer switch, a light bulb can be on, off, or a combination of both.

If math and physics do not scare you, quantum computing involves quantum superposition, which explains the nuances more eloquently.

One reason quantum computers are not an absolute replacement for classical computers has to do with their physical requirements. Quantum computers require extremely cold conditions in order for quantum bits or qubits to remain "coherent." For example, much of D-Wave's Input/Output (I/O) system must function at 15 millikelvin (mK), which is near absolute zero. 15 mK is equivalent to minus 273.135 degrees Celsius or minus 459.643 degrees Fahrenheit. By comparison, the classical computers most individuals own have built-in fans, and they may include heat sinks to dissipate heat. Supercomputers tend to be cooled with circulated water. In other words, the ambient operating environments required by quantum computers and classical computers vary greatly. Naturally, there are efforts that are aimed at achieving quantum coherence in room temperature conditions, one of which is described here.

Quantum computers and classical computers are fundamentally different tools. In a recent report, Brian Hopkins, vice president and principal analyst at Forrester, explained: "Quantum computing is a class of emerging hardware and software that exploits subatomic phenomenon to solve computationally hard problems."

What to expect, when

There's a lot of confusion about the current state of quantum computing, which the industry research firms Boston Consulting Group (BCG) and Forrester are attempting to clarify.

In the Forrester report, Hopkins estimates that quantum computing is in the early stages of commercialization, a stage that will persist through 2025 to 2030. The growth stage will begin at the end of that period and continue through the end of the forecast period which is 2050.

A recent BCG report estimates that quantum computing will become a $263 billion to $295 billion market under two different forecasting scenarios, both of which span 2025 to 2050. BCG also reasons that the quantum computing market will advance in three distinct phases:

  1. The first generation will be specific to applications that are quantum in nature, similar to what D-Wave is doing.
  2. The second generation will unlock what report co-author and BCG senior partner Massimo Russo calls "more interesting use cases."
  3. In the third generation, quantum computers will have achieved the number of logical qubits required to achieve Quantum Supremacy. (Note: Quantum Supremacy and logical qubits versus physical qubits are important concepts addressed below.)

"If you consider the number of logical qubits [required for problem-solving], it's going to take a while to figure out what use cases we haven't identified yet," said BCG's Russo. "Molecular simulation is closer. Pharma company interest is higher than in other industries."

Life sciences, developing new materials, manufacturing, and some logistics problems are ideal for quantum computers for a couple of possible reasons:

  • A quantum machine is more adept at solving quantum mechanics problems than classical computers, even when classical computers are able to simulate quantum computers
  • The nature of the problem is so difficult that it can't be solved using classical computers at all, or it can't be solved using classical computers within a reasonable amount of time, at a reasonable cost.

There are also hybrid use cases in which parts of a problem are best solved by classical computers and other parts of the problem are best solved by quantum computers. In this scenario, the classical computer breaks the problem apart, communicates with the quantum computer via an API, receives the result(s) from the quantum computer and then assembles a final answer to the problem, according to BCG's Russo.

"Think of it as a coprocessor that will address problems in a quantum way," he said.

While there is a flurry of quantum computing announcements at present, practically speaking, it may take a decade to see the commercial fruits of some efforts and multiple decades to realize the value of others.

Logical versus physical qubits

All qubits are not equal, which is true in two regards. First, there's an important difference between logical qubits and physical qubits. Second, the large vendors are approaching quantum computing differently, so their "qubits" may differ.

When people talk about quantum computers or semiconductors that have X number of qubits, they're referring to physical qubits. The number of qubits matters because the computational power grows exponentially with each additional qubit. According to Microsoft, a calculator is more powerful than a single qubit, and "simulating a 50-qubit quantum computation would arguably push the limits of existing supercomputers."
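To make that growth concrete (a simple sketch; the state of n qubits is described by 2^n amplitudes, which is what a classical simulator must track):

# the number of amplitudes a classical simulator must track doubles per qubit
for n_qubits in (10, 20, 50):
    print(n_qubits, 2 ** n_qubits)
# prints 1024, 1048576 and 1125899906842624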

BCG's Russo said for semiconductors, the number of physical qubits required to create a logical qubit can be as high as 3,000:1. Forrester's Hopkins stated he's heard numbers ranging from 10,000 to 1 million or more, generally.

"No one's really sure," said Hopkins. "Microsoft thinks [it's] going to be able to achieve a 5X reduction in the number of physical qubits it takes to produce a logical qubit."  

The difference between physical qubits and logical qubits is extremely important because physical qubits are so unstable they need the additional qubits to ensure error correction and fault tolerance.

Get a grip on Quantum Supremacy

Quantum Supremacy does not signal the death of classical computers for the reasons stated above. Google cites this definition: "A critical question for the field of quantum computing in the near future is whether quantum devices without error correction can perform a well-defined computational task beyond the capabilities of state-of-the-art classical computers, achieving so-called quantum supremacy."

"We're not going to achieve Quantum Supremacy overnight, and we're not going to achieve it across the board," said Forrester's Hopkins. "Supremacy is a stepping stone to delivering a solution. Quantum Supremacy is going to be achieved domain by domain, so we're going to achieve Quantum Supremacy, which Google is advancing, and then Quantum Value, which IBM is advancing, in quantum chemistry or molecular simulation or portfolio risk management or financial arbitrage."

The fallacy is believing that Quantum Supremacy means that quantum computers will be better at solving all problems, ergo classical computers are doomed.

Given the proper definition of the term, Google is attempting to achieve Quantum Supremacy with its 72-qubit quantum processor, Bristlecone.

How to get started now

First, understand the fundamental differences between quantum computers and classical computers. This article is merely introductory, given its length.

Next, (before, after and simultaneously with the next piece of advice) find out what others are attempting to do with quantum computers and quantum simulations and consider what use cases might apply to your organization. Do not limit your thinking to what others are doing. Based on a fundamental understanding of quantum computing and your company's business domain, imagine what might be possible, whether the end result might be a minor percentage optimization that would give your company a competitive advantage or a disruptive innovation such as a new material.

Experimentation is also critical, not only to test hypotheses, but also to better understand how quantum computing actually works. The experimentation may inspire new ideas, and it will help refine existing ideas. From a business standpoint, don't forget to consider the potential value that might result from your work.

Meanwhile, if you want to get hands-on experience with a real quantum computer, try IBM Q. The "IBM Q Experience" includes user guides, interactive demos, the Quantum Composer which enables the creation of algorithms that run on real quantum computing hardware, and the QISKit software developer kit.

Also check out Quantum Computing Playground which is a browser-based WebGL Chrome experiment that features a GPU-accelerated quantum computer with a simple IDE interface, its own scripting language with debugging and 3D quantum state visualization features.

In addition, the Microsoft Quantum Development Kit Preview is available now. It includes the Q# language and compiler, the Q# standard library, a local quantum machine simulator, a trace quantum simulator that estimates the resources required to run a quantum program, and a Visual Studio extension.

Source: Informationweek

The software robot invasion is underway

Companies are adopting robotic process automation tools as they look to reduce errors and increase process efficiency.

One of the more disruptive emerging technologies, robotic process automation (RPA), appears primed for significant growth, despite the fact that many organizations remain confused or concerned about the impact these tools might have on their operations.

For some, RPA is seen as a technology designed to replace full-time human labor outright and therefore to be treated with caution. For others, it has the potential for huge cost savings and can enable enterprises to move people from mundane tasks such as data entry to more exciting endeavors.

Recent research indicates that there's a growing demand for RPA, which involves the use of software robots to handle any rules-based repetitive tasks quickly and cost effectively. And deploying the technology doesn't have to result in throwing a lot of people out of work.

"Interest and adoption of RPA has spiked dramatically across [the largest] organizations," said Tony Abel, a managing director with consulting firm Protiviti. "Organizations that have been dabbling in trials of other AI [artificial intelligence] technologies are realizing that to complete the vision of their digital transformation, they need to include an AI component that addresses their operational challenges."

Most organizations that have deployed RPA are looking to reduce errors and processing times and to integrate across expansive technology platforms, Abel said. "They're also looking to improve controls that both accelerate the existing audit process and anticipate greater complexity in audit processing in the future," he said.

Companies that are truly leveraging the value of RPA are doing so in such a way that improves their human capital position by replacing or enhancing activities currently performed by humans with robots, Abel said. Others are still reluctant to recognize the direct correlation between what a robot can do and what has historically been done by humans, he said. They are therefore hesitant to invest in the technology.

Clearly, these are still the early days of RPA implementation.

"Many organizations are still just getting started," Abel said. "They begin with a specific use case, usually by applying proof-of-concept bots in one small area of the business, whether that's supplier setup, system access provisioning, or invoice reconciliation."

Once they realize the value, they then look across the enterprise to other business processes that could reap the benefits of automation, Abel said.

"Another trend we are seeing is the use of robotics in delivery of services, particularly outsourced services," Abel said. "Also, the continual increase in labor rates in major off-shore locations is driving substitution of human labor for automation."

There are no guarantees of success. "We've seen a number of organizations that have stumbled with RPA implementations," Abel said. This usually occurs in large enterprises that are highly bureaucratic, he said.

Often several areas within an organization are running trials of one or multiple RPA products without fully committing or appropriately dedicating the time and skills necessary. "They are also not talking with one another," Abel said. "They have approached it with one foot out the door and become disillusioned with the results."

Disillusionment also comes when organizations are not able to reduce as much human capital as they had hoped. "Their business cases [and return on investment] was based purely on reducing headcount, which is a narrow way to view the value RPA can provide," Abel said. "The issues organizations are facing are a consequence of not having proper guidance and leadership in their RPA journey."

Source: ZDnet

Cyber security: Machine learning to be the main focus in 2018

Identified as one of this year’s biggest issues, machine learning has some very diverse applications in the world of cyber security.

In a landscape marked by an explosion in the number of security incidents, machine learning should be the main focal point in 2018. The promise of automated learning is of as much interest to hackers as it is to companies concerned with protecting their informational assets. The subject has even made it onto McAfee’s 2018 five most important trends in cyber security. 

Machine learning as a new battleground

Identified as one of this year’s biggest issues, machine learning has some very diverse applications in the world of cyber security. For example, it can be used to analyse the activities carried out by an authentication service so as to trigger an alert or block access when abnormal behaviour is detected. In this context, the system will study all the parameters of the attempted connection and seek to establish all the useful correlations that will allow it to decide whether it should, or shouldn’t, be authorised. Here, it’s the systems’ ability to collect and process large volumes of data in real time that gives the machine a form of intelligence.

On the other hand, attackers are not unaware of the benefits of this approach, and are exploiting it themselves to probe for vulnerabilities or to scale up their social engineering campaigns. Their work has given rise to new tools that can learn and adapt in order to exploit breaches more efficiently. We just need to wait and see which channels these attacks will take.

Other major trends in 2018

Marked by wide-scale offensives such as WannaCry and BadRabbit, 2017 saw more than a 50% increase in the number of ransomware attacks. McAfee estimates that in 2018 hackers will likely carry out fewer but more targeted attacks, in order to maximise the chances of success. The market may then shift from a volume-based approach towards more sophisticated tools oriented towards the most lucrative victims. Smartphones will be among the hottest new targets.

Particular attention should be paid to new applications being distributed by one or several Cloud providers following the “serverless” logic. This new way of using resources on demand is inducing new security risks: each new application used actually constitutes a new potential attack vector.

And the last of these trends: the protection of private individuals faced with threats arising from the growth in personal data, fostered in particular by the wide accessibility of IoT devices. McAfee draws attention to two aspects of the phenomenon that need to be considered: firstly, the abuses, particularly in marketing, that can come from device manufacturers exploiting this information, despite the upcoming General Data Protection Regulation (GDPR).

As a corollary to the previous point, McAfee also underlines the often poorly-managed importance of consent given by the end-user of online services that involve personal data.

Of machines and men

Conclusion? Now more than ever, cyber security in 2018 will be the concern of both machines and humans. Machines will have to learn how to come to terms with ever more sophisticated techniques of attack and defence. Humans, on the other hand, will have to learn how to manage how their information is used.

Source: Soprasteria

Quote of the Week

"Technology can be our best friend, and technology can also be the biggest party pooper of our lives. It interrupts our own story, interrupts our ability to have a thought or a daydream, to imagine something wonderful, because we're too busy bridging the walk from the cafeteria back to the office on the cell phone."

Steven Spielberg

Artificial Intelligence will transform Universities

Artificial Intelligence (AI) is a technology whose time has come.

As AI surpasses human abilities in Go and poker – two decades after Deep Blue trounced chess grandmaster Garry Kasparov – it is seeping into our lives in ever more profound ways. It affects the way we search the web, receive medical advice and whether we receive finance from our banks.

The most innovative AI breakthroughs, and the companies that promote them – such as DeepMind, Magic Pony, Ayasdi, Wolfram Alpha and Improbable – have their origins in universities. Now AI will transform universities.

We believe AI is a new scientific infrastructure for research and learning that universities will need to embrace and lead, otherwise they will become increasingly irrelevant and eventually redundant.

Through their own brilliant discoveries, universities have sown the seeds of their own disruption. How they respond to this AI revolution will profoundly reshape science, innovation, education – and society itself.

DeepMind was created by three scientists, two of whom met while working at University College London. Demis Hassabis, one of DeepMind’s founders, who has a PhD in cognitive neuroscience from UCL and undertook postdoctoral studies at MIT and Harvard, is one of many scientists convinced that AI and machine learning will improve the process of scientific discovery.

It is already eight years since scientists at the University of Aberystwyth created a robotic system that carried out an entire scientific process on its own: formulating hypotheses, designing and running experiments, analysing data, and deciding which experiments to run next.

Complex data sets

Applied in science, AI can autonomously create hypotheses, find unanticipated connections, and reduce the cost of gaining insights and the ability to be predictive.

AI is being used by publishers such as Reed Elsevier for automating systematic academic literature reviews, and can be used for checking plagiarism and misuse of statistics. Machine learning can potentially flag unethical behaviour in research projects prior to their publication.

AI can combine ideas across scientific boundaries. There are strong academic pressures to deepen intelligence within particular fields of knowledge, and machine learning helps facilitate the collision of different ideas, joining the dots of problems that need collaboration between disciplines.

As AI gets more powerful, it will not only combine knowledge and data as instructed, but will search for combinations autonomously. It can also assist collaboration between universities and external parties, such as between medical research and clinical practice in the health sector.

The implications of AI for university research extend beyond science and technology.

Philosophical questions

In a world where so many activities and decisions that were once undertaken by people will be replaced or augmented by machines, profound philosophical questions arise about what it means to be human. Computing pioneer Douglas Engelbart – whose inventions include the mouse, windows and cross-file editing – saw this in 1962 when he wrote of “augmenting human intellect”.

Expertise in fields such as psychology and ethics will need to be applied to thinking about how people can more rewardingly work alongside intelligent machines and systems.

Research is needed into the consequences of AI on the levels and quality of employment and the implications, for example, for public policy and management.

When it comes to AI in teaching and learning, many of the more routine academic tasks (and the least rewarding for lecturers), such as grading assignments, can be automated. Chatbots, intelligent agents using natural language, are being developed by universities such as the Technical University of Berlin; these will answer questions from students to help them plan their course of studies.

Virtual assistants can tutor and guide more personalized learning. As part of its Open Learning Initiative (OLI), Carnegie Mellon University has been working on AI-based cognitive tutors for a number of years. It found that its OLI statistics course, run with minimal instructor contact, resulted in comparable learning outcomes for students with fewer hours of study. In one course at the Georgia Institute of Technology, students could not tell the difference between feedback from a human being and a bot.

Global classroom

Mixed reality and computer vision can provide a high-fidelity, immersive environment to stimulate interest and understanding. Simulations and games technology encourage student engagement and enhance learning in ways that are more intuitive and adaptive. They can also engage students in co-developing knowledge, involving them more in university research activities. The technologies also allow people outside of the university and from across the globe to participate in scientific discovery through global classrooms and participative projects such as Galaxy Zoo.

As well as improving the quality of education, AI can make courses available to many more people. Previously access to education was limited by the size of the classroom. With developments such as Massive Open Online Courses (MOOCs) over the last five years, tens of thousands of people can learn about a wide range of university subjects.

It still remains the case, however, that much advanced learning, and its assessment, requires personal and subjective attention that cannot be automated. Technology has ‘flipped the classroom’, forcing universities to think about where we can add real value – such as personalised tuition, and more time with hands-on research, rather than traditional lectures.

Monitoring performance

University administrative processes will benefit from utilising AI on the vast amounts of data they produce during their research and teaching activities. This can be used to monitor performance against their missions, be it in research, education or promotion of diversity, and can be produced frequently to assist more responsive management. It can enhance the quality of performance league tables, which are often based on data with substantial time lags. It can allow faster and more efficient applicant selection.

AI allows the tracking of individual student performance, and universities such as Georgia State and Arizona State are using it to predict marks and indicate when interventions are needed to allow students to reach their full potential and prevent them from dropping out.

Such data analytics of students and staff raises weighty questions about how to respect privacy and confidentiality, which will require judicious codes of practice.

The blockchain is being used to record grades and qualifications of students and staff in an immediately available and incorruptible format, helping prevent unethical behaviour, and could be combined with AI to provide new insights into student and career progression.

Universities will need to be attuned to the new opportunities AI produces for supporting multidisciplinarity. In research this will require creating new academic departments and jobs, with particular demands for data scientists. Curricula will need to be responsive, educating the scientists and technologists who are creating and using AI, and preparing students in fields as diverse as medicine, accounting, law and architecture, whose future work and careers will depend on how successfully they ally their skills with the capabilities of machines.

New curricula should allow for the unpredictable path of AI’s development, and should be based on deep understanding, not on the immediate demands of companies.

Addressing the consequences

Universities are the drivers of disruptive technological change, like AI and automation. It is the duty of universities to reflect on their broader social role, and create opportunities that will make society resilient to this disruption.

We must address the consequences of technological unemployment, and universities can help provide skills and opportunities for people whose jobs have been adversely affected.

There is stiff competition for people skilled in the development and use of AI, and universities see many of their talented staff attracted to work in the private sector. One of the most pressing AI challenges for universities is the need for them to develop better employment conditions and career opportunities to retain and incentivize their own AI workers. They need to create workplaces that are flexible, agile and responsive to interactions with external sources of ideas, and are open to the mixing of careers as people move between universities and business.

The fourth industrial revolution is profoundly affecting all elements of contemporary societies and economies. Unlike the previous revolutions, where the structure and organization of universities were relatively unaffected, the combination of technologies in AI is likely to shake them to their core. The very concept of ‘deep learning’, central to progress in AI, clearly impinges on the purpose of universities, and may create new competition for them.

If done right, AI can augment and empower what universities already do; but continuing their missions of research, teaching and external engagement will require fundamental reassessment and transformation. Are universities up to the task?

Source: Weforum

Deep Sea exploring with Lasers & big data

Deep Sea exploring with Lasers & big data

Advances in computing power and smart data tools are allowing scientists to build amazing high-resolution maps of the ocean floor

We currently know more about the surface of Mars than we do about our planet’s ocean floor. This seems even more ridiculous when you consider that the oceans cover 71 percent of the Earth. They also play a vital role in providing food and fresh air (ocean plants produce half of the world's oxygen), as well as shaping our weather and climate.

Of course, “most people think the bottom of the ocean is like a giant bathtub filled with mud — boring, flat and dark,” said oceanographer Robert D. Ballard [1], the man who discovered the wreck of the Titanic. “But it contains the largest mountain range on earth, canyons far grander than the Grand Canyon and towering vertical cliffs rising up three miles — more than twice the height of Yosemite’s celebrated El Capitan.”

Which raises the question: what else lies hidden in the deep?

With only 5 percent of the ocean floor mapped in any real detail, there’s undoubtedly much more to discover. But it’s a mammoth task. The Seabed 2030 project will spend the next 13 years systematically depth-logging 140 million square miles of ocean, with the goal of leaving no feature larger than 100 metres unmapped.

Even today, there’s no shortage of bottom topography (aka bathymetry) data. Scientists can draw information from ships, ROVs, buoys and satellites, with measurements taken using a combination of multibeam sonar, Lidar and laser altimetry.

The challenge isn’t gathering the data, it’s making sense of it all.

Enter big data analytics. Advances in computing power and smart data tools are allowing scientists to build high-resolution maps from a variety of different sources. For example, National ICT Australia (NICTA) and the University of Sydney used big data analytics and AI to convert 15,000 seafloor sediment samples into a unique digital map [2].
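
The NICTA/University of Sydney pipeline is not reproduced here; as a rough sketch of the general technique of interpolating scattered point samples into a continuous map with a classifier, consider the following, where the sample data and sediment classes are invented and the real work used a more sophisticated probabilistic model:

    # Rough sketch: turning scattered seafloor sediment samples into a map by
    # classifying every grid cell from nearby labelled points. Data and
    # classes are invented for illustration.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(1)
    # Sample locations (lon, lat) and a sediment class at each point
    # (0 = clay, 1 = sand, 2 = ooze)
    samples = np.column_stack([rng.uniform(-180, 180, 500),
                               rng.uniform(-90, 90, 500)])
    labels = rng.integers(0, 3, size=500)

    clf = KNeighborsClassifier(n_neighbors=5).fit(samples, labels)

    # Predict a sediment class over a regular grid to form the "map"
    lon, lat = np.meshgrid(np.linspace(-180, 180, 90), np.linspace(-90, 90, 45))
    grid = np.column_stack([lon.ravel(), lat.ravel()])
    seafloor_map = clf.predict(grid).reshape(lat.shape)
    print(seafloor_map.shape)  # (45, 90) grid of predicted sediment classes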

The Black Sea Maritime Archaeological Project (MAP), meanwhile, set out to laser-map the 168,500-square-mile inland sea to study the effects of climate change [3]. But the data ultimately revealed more than 60 previously unknown shipwrecks spanning 2,500 years of maritime history, including vessels from the Roman, Byzantine and Ottoman periods.

Big data analytics isn’t just helping to map the world’s oceans. It’s becoming instrumental in how we monitor and protect them. The technology is already being used to regulate fishing and to provide real-time data for optimising ship routes. It can also be used to track water temperature and flow to predict extreme weather events based on historical simulations.

“Exploration and mapping, and making the data open source, would be for the betterment of all citizens,” Ballard told The Smithsonian Magazine [1]. “Not just in economic terms but in opportunities for unexpected discoveries.”

Big data analytics has the potential to see patterns in vast reams of data, crunching the numbers to provide analysis and insight. Armed with this information, we can better understand, sustain and protect our oceans. In doing so, we can have a positive effect on the overall health of our planet.

Source: Intel

What is Machine Learning?

What is Machine Learning?

Typing “what is machine learning?” into a Google search opens up a Pandora’s box of forums, academic research, and hearsay – and the purpose of this article is to simplify the definition and understanding of machine learning with the direct help of our panel of machine learning researchers.

In addition to an informed, working definition of machine learning (ML), we aim to provide a succinct overview of the fundamentals of machine learning, the challenges and limitations of getting machines to ‘think’, some of the issues being tackled today in deep learning (the ‘frontier’ of machine learning), and key takeaways for developing machine learning applications.

This article will be broken up into the following sections:

  • What is machine learning?
  • How we arrived at our definition (i.e. the perspective of expert researchers)
  • Machine learning basic concepts
  • Visual representation of ML models
  • How we get machines to learn
  • An overview of the challenges and limitations of ML
  • Brief introduction to deep learning

We put together this resource to help with whatever your area of curiosity about machine learning – so scroll along to your section of interest, or feel free to read the article in order, starting with our machine learning definition below:

What is Machine Learning?

“Machine Learning is the science of getting computers to learn and act like humans do, and improve their learning over time in autonomous fashion, by feeding them data and information in the form of observations and real-world interactions.”

The above definition encapsulates the ideal objective or ultimate aim of machine learning, as expressed by many researchers in the field. The purpose of this article is to provide a business-minded reader with expert perspective on how machine learning is defined, and how it works. References and related researcher interviews are included at the end of this article for further digging.

How We Arrived at Our Definition:

(Our aggregate machine learning definition can be found at the beginning of this article)

As with any concept, machine learning may have a slightly different definition, depending on whom you ask. We combed the Internet to find five practical definitions from reputable sources:

  1. “Machine Learning at its most basic is the practice of using algorithms to parse data, learn from it, and then make a determination or prediction about something in the world.” – Nvidia 
  2. “Machine learning is the science of getting computers to act without being explicitly programmed.” – Stanford
  3. “Machine learning is based on algorithms that can learn from data without relying on rules-based programming.” – McKinsey & Co.
  4. “Machine learning algorithms can figure out how to perform important tasks by generalizing from examples.” – University of Washington
  5. “The field of Machine Learning seeks to answer the question ‘How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?’” – Carnegie Mellon University

We sent these definitions to experts whom we’ve interviewed and/or included in one of our past research consensuses, and asked them to respond with their favorite definition or to provide their own. Our introductory definition is meant to reflect the varied responses. Below are some of their responses:

Dr. Yoshua Bengio, Université de Montréal:

ML should not be defined by negatives (thus ruling out 2 and 3). Here is my definition:

Machine learning research is part of research on artificial intelligence, seeking to provide knowledge to computers through data, observations and interacting with the world. That acquired knowledge allows computers to correctly generalize to new settings.

Dr. Danko Nikolic, CSC and Max-Planck Institute:

(edit of number 2 above): “Machine learning is the science of getting computers to act without being explicitly programmed, but instead letting them learn a few tricks on their own.”

Dr. Roman Yampolskiy, University of Louisville:

Machine Learning is the science of getting computers to learn as well as humans do or better.

Dr. Emily Fox, University of Washington: 

My favorite definition is #5.

Machine Learning Basic Concepts

There are many different types of machine learning algorithms, with hundreds published each day, and they’re typically grouped by either learning style (i.e. supervised learning, unsupervised learning, semi-supervised learning) or by similarity in form or function (i.e. classification, regression, decision tree, clustering, deep learning, etc.). Regardless of learning style or function, all combinations of machine learning algorithms consist of the following:

  • Representation (a set of classifiers or the language that a computer understands)
  • Evaluation (aka objective/scoring function)
  • Optimization (the search method used to find the highest-scoring classifier; both off-the-shelf and custom optimization methods are used) – a minimal sketch pairing all three components appears just below


Image credit: Dr. Pedro Domingos, University of Washington
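
To make those three components concrete, here is a minimal sketch of our own (not from Domingos) pairing a representation (decision trees), an evaluation function (accuracy) and an optimization method (a grid search over tree depth):

    # Minimal sketch of the three components of a learning algorithm:
    # representation = decision trees, evaluation = accuracy,
    # optimization = a grid search over the tree's depth.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)

    search = GridSearchCV(
        DecisionTreeClassifier(random_state=0),        # representation
        param_grid={"max_depth": [1, 2, 3, 5, None]},  # candidate classifiers
        scoring="accuracy",                            # evaluation
        cv=5,                                          # scored on held-out folds
    )
    search.fit(X, y)                                   # optimization: pick the best
    print(search.best_params_, round(search.best_score_, 3))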

The fundamental goal of machine learning algorithms is to generalize beyond the training samples, i.e. to successfully interpret data they have never ‘seen’ before.

Visual Representations of Machine Learning Models

Concepts and bullet points can only take one so far in understanding. When people ask “What is machine learning?”, they often want to see what it is and what it does. Below are some visual representations of machine learning models, with accompanying links for further information. Even more resources can be found at the bottom of this article.

How We Get Machines to Learn

There are different approaches to getting machines to learn, from using basic decision trees to clustering to layers of artificial neural networks (the latter of which has given rise to deep learning), depending on what task you’re trying to accomplish and the type and amount of data that you have available.
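
As a concrete instance of one of those approaches, clustering needs no labels at all; a minimal k-means sketch on synthetic data:

    # Minimal sketch of unsupervised learning: k-means groups unlabelled
    # points into clusters with no target variable at all.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Two synthetic blobs of 2-D points, no labels provided
    points = np.vstack([rng.normal(0, 0.5, (100, 2)),
                        rng.normal(3, 0.5, (100, 2))])

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
    print(kmeans.cluster_centers_)  # roughly (0, 0) and (3, 3)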

While emphasis is often placed on choosing the best learning algorithm, researchers have found that some of the most interesting questions arise when none of the available machine learning algorithms performs up to par. Most of the time this points to a problem with the training data, but it also occurs when working with machine learning in new domains.

Research done while working on real applications often drives progress in the field, for two reasons: first, the tendency to discover the boundaries and limitations of existing methods; and second, researchers and developers working with domain experts, leveraging their time and expertise to improve system performance.

Sometimes this also occurs by “accident.” We might consider model ensembles, or combinations of many learning algorithms used to improve accuracy, to be one example. Teams competing in the 2009 Netflix Prize found that they got their best results when combining their learners with other teams’ learners, resulting in an improved recommendation algorithm (read Netflix’s blog for more on why the company didn’t end up using this ensemble).

One important point (based on interviews and conversations with experts in the field), in terms of application within business and elsewhere, is that machine learning is not just about automation, an often misunderstood concept. If you think of it only that way, you’re bound to miss the valuable insights that machines can provide and the resulting opportunities (rethinking an entire business model, for example, as has happened in industries like manufacturing and agriculture).

Machines that learn are useful to humans because, with all of their processing power, they’re able to more quickly highlight or find patterns in big (or other) data that would have otherwise been missed by human beings. Machine learning is a tool that can be used to enhance humans’ abilities to solve problems and make informed inferences on a wide range of problems, from helping diagnose diseases to coming up with solutions for global climate change.

Challenges and Limitations

“Machine learning can’t get something from nothing…what it does is get more from less.” – Dr. Pedro Domingos, University of Washington

The two biggest historical (and ongoing) problems in machine learning have involved overfitting (in which the model is biased towards the training data and does not generalize to new data, and/or exhibits variance, i.e. learns random things from the training data) and dimensionality (algorithms with more features work in higher/multiple dimensions, making the data harder to understand). Having access to a large enough data set has in some cases also been a primary problem.

One of the most common mistakes among machine learning beginners is testing on training data and having the illusion of success; Domingos (and others) emphasize the importance of keeping part of the data set separate when testing models, using only that reserved data to test a chosen model, and then learning on the whole data set once a model is chosen.
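
A minimal sketch of that discipline follows: hold out a test set, compare candidate models only on the reserved data, then retrain the chosen model on the whole data set (models and data here are illustrative):

    # Sketch of the evaluation discipline described above: keep a test set
    # aside, choose a model using only that reserved data, then retrain the
    # chosen model on the full data set.
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    candidates = {
        "tree": DecisionTreeClassifier(random_state=0),
        "logreg": LogisticRegression(max_iter=10000),
    }
    scores = {name: m.fit(X_train, y_train).score(X_test, y_test)
              for name, m in candidates.items()}
    best = max(scores, key=scores.get)
    print(scores, "->", best)

    final_model = candidates[best].fit(X, y)  # retrain on the whole data set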

When a learning algorithm (i.e. learner) is not working, often the quicker path to success is to feed the machine more data; the availability of data is by now well known as a primary driver of progress in machine and deep learning in recent years. However, this can lead to issues with scalability, in which we have more data but the time needed to learn from that data remains an issue.

In terms of purpose, machine learning is not an end or a solution in and of itself. Furthermore, attempting to use it as a blanket solution, i.e. “BLANK”, is not a useful exercise; instead, coming to the table with a problem or objective is often best driven by a more specific question – “BLANK”.

Deep Learning and Modern Developments in Neural Networks

Deep learning involves the study and design of machine algorithms for learning good representations of data at multiple levels of abstraction. The recent publicity around deep learning through DeepMind, Facebook, and other institutions has highlighted it as the “next frontier” of machine learning.

The International Conference on Machine Learning (ICML) is widely regarded as one of the most important machine learning conferences in the world. This year’s conference took place in June in New York City, bringing together researchers from all over the world who are working on addressing current challenges in deep learning, including:

  1. Unsupervised learning in small data sets
  2. Simulation-based learning and transferability to the real world

Deep-learning systems have made great gains over the past decade in domains like object detection and recognition, text-to-speech, information retrieval and others. Research is now focused on developing data-efficient machine learning, i.e. deep learning systems that can learn more efficiently, with the same performance in less time and with less data, in cutting-edge domains like personalized healthcare, robot reinforcement learning, sentiment analysis, and others.

Key Takeaways in Applying Machine Learning

Below is a selection of best practices and concepts of applying machine learning that we’ve collated from our interviews for our podcast series, and from select sources cited at the end of this article. We hope that some of these principles will clarify how ML is used, and how to avoid some of the common pitfalls that companies and researchers might be vulnerable to in starting off on an ML-related project.

  • Arguably the most important factors in successful machine learning projects are the features used to describe the data (which are domain-specific) and having adequate data to train your models in the first place
  • Most of the time when algorithms don’t perform well, it’s due to a problem with the training data (i.e. insufficient amounts/skewed data, noisy data, or insufficient features describing the data for making decisions)
  • “Simplicity does not imply accuracy” – there is (according to Domingos) no given connection between the number of parameters of a model and its tendency to overfit
  • Obtaining experimental data (as opposed to observational data, over which we have no control) should be done if possible (for example, data gleaned from sending different variations of an email to a random audience sampling)
  • Whether or not we label data causal or correlative, the more important point is to predict the effects of our actions 
  • Always set aside a portion of your training data set for cross validation; you want your chosen classifier or learning algorithm to perform well on fresh data

Source: Techemergence

The Digital Twin effect: Four ways it can revitalise your business

The Digital Twin effect: Four ways it can revitalise your business

Enterprises across the globe are embracing digital twins to revitalize their businesses. By 2021, half the world’s large industrial companies will rely on this innovative technology to gain additional insight around their products, assets, processes, operations, and more.

Here are four specific ways digital twins can benefit your enterprise:

Enable data-driven decision making

Creating a digital twin involves building a comprehensive digital representation of the many components of a physical object, from outer features to the software inside. Companies develop digital twins by attaching Internet of Things (IoT) sensors to their products, assets, or equipment.

Building digital twins will give you digitalized versions of bills of materials, 2D drawings, and 3D models. More importantly, you’ll have an accurate view of how your devices are operating in real time.

This data empowers you to make better decisions. If your manufacturing equipment is lagging, you can fix or upgrade the machinery before it impacts your company’s efficiency. If a product is underperforming, you can make improvements so future releases don’t have similar issues.

Automate business processes

On top of providing greater connectivity between your company and its products, digital twins help your enterprise better connect with its business processes.

Real-time data makes it possible for you to spot and put an end to business-process inefficiencies. But combining real-time data with historical data and machine learning capabilities in a digital twin allows you to predict problems and automatically resolve them.

To the naked eye, an asset may be operating as expected. Inside the machine, however, it’s another story. A glitch in the system is causing your asset to gradually slow down. Five days from now, it’ll fail completely.

Without the right technology, you’d never know that. But digital twins help you anticipate issues and prevent problems before they even occur. They enable you to detect anomalies and automate repair processes at the first sign of weakness. And by coming to your asset’s rescue sooner rather than later, you can avoid serious service interruption or prolonged downtime.
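
The analytics stack behind any particular twin is vendor-specific; the core idea, though, can be sketched simply: learn a ‘normal’ band from historical telemetry and flag live readings that drift outside it. In the toy Python sketch below, every reading and threshold is invented:

    # Toy sketch of digital-twin anomaly detection: flag sensor readings
    # that drift outside the band learned from historical telemetry.
    # Readings and thresholds are invented for illustration.
    import statistics

    historical_rpm = [1500, 1498, 1502, 1497, 1503, 1499, 1501, 1500]
    mean = statistics.mean(historical_rpm)
    stdev = statistics.stdev(historical_rpm)

    def is_anomalous(reading, k=3.0):
        """True if the reading is more than k standard deviations from normal."""
        return abs(reading - mean) > k * stdev

    for rpm in [1501, 1496, 1460]:  # live readings from the twin
        if is_anomalous(rpm):
            print(f"{rpm} rpm anomalous -> open a maintenance ticket")
        else:
            print(f"{rpm} rpm within the normal band")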

Increase collaboration

IoT keeps data flowing. Digital twins allow you to access this wealth of data in real time. But you don’t have to keep all that data to yourself. In fact, you'd be wise to share it.

Creating a digital twin network makes it easy to share data with internal colleagues, external supply chain partners, and even customers. With access to the same insight, you, your partners, and your customers can collaboratively improve products, processes, and more.

Sharing digital twin data with multiple internal departments ensures everyone’s always on the same page. Your R&D, finance, marketing, and sales teams – groups that typically work in silos – can collaborate to ensure your new product is properly designed, accurately priced, sufficiently promoted, and commercially viable.

Supply chain partners benefit from a network of digital twins with enhanced visibility. If an asset malfunctions, your maintenance provider knows it needs to mobilize a team to fix the equipment. If your company manufactures a product ahead of schedule, your logistics provider knows it can pick up the goods and deliver them early.

Finally, digital twin networks help you glean invaluable insight from your customers. By monitoring how customers interact with your goods, you can remove underused features from future product iterations or develop new products that highlight popular features.

Enabling an open, collaborative environment through a network of digital twins offers you the chance to transform engineering, operations, and everything else in between.

Create new business models

No enterprise is immune to industry-altering disruption. That’s why companies must constantly look for new ways to re-imagine existing business models and generate revenue.

Digital twins present an opportunity to do both.

Say you manufacture compressed air supply systems. In addition to selling your equipment and installing it at your customer’s site, you offer to maintain it throughout the asset life cycle and charge fees based on air consumption rather than a fixed rate.

With a digital twin network you share with your customer, you can monitor the condition of your asset around the clock and accurately track how much air your customer consumes. This reliable and transparent method ensures you’re always standing by to repair the asset, if necessary, and charging the proper amount of money each billing cycle.

Thinking outside the box and exploring innovative as-a-service business models is a surefire way to remain profitable in today’s ever-evolving digital world.

Digital Twin in Action

Here are two shining examples of companies winning with this exciting, new technology:

Stara

This Brazil-based tractor manufacturer uses digital twins to modernize farming.

By outfitting its tractors with IoT sensors, the company can increase equipment performance. With real-time visibility into how its tractors operate, Stara can proactively prevent equipment malfunctions and improve asset uptime.

The company has also leveraged digital twins to create new business models. Stara launched a profitable new service that provides farmers with real-time insight detailing the optimal conditions for planting crops and improving farm yield.

Farmers have reduced seed use by 21% and fertilizer use by 19% thanks to Stara’s guidance.

Kaeser

This manufacturer of compressed air products used digital twins to go from merely selling a product to selling a service.

Instead of installing equipment at a customer’s site and leaving operation to the customer, Kaeser maintains the asset throughout its lifecycle and charges fees based on air consumption rather than a fixed rate.

A digital twin network enables the company to monitor the condition of its equipment around the clock and measure customer air consumption. Real-time asset data helps Kaeser ensure equipment uptime and charge an accurate amount of money each billing cycle.

To date, the company has cut commodity costs by 30% and onboarded 50% of major vendors using digital twins.

Replicating your business for the better

Digital twins give you the ability to enable data-driven decision making, automate business processes, increase collaboration, and create new business models. They help you improve partner collaboration so you can meet evolving customer demands efficiently and cost-effectively.

Source: Forbes

Will AI help cybersecurity or the hackers?

Will AI help cybersecurity or the hackers?

Like in any battle, the ability to harness new technologies can be a decisive factor in victory. In cybersecurity, that new technology is artificial intelligence, and it will benefit both sides.

Source: Mashable

Will AI bring a new renaissance?

Will AI bring a new renaissance?

Artificial intelligence is becoming the fastest disruptor and generator of wealth in history. It will have a major impact on everything. Over the next decade, more than half of the jobs today will disappear and be replaced by AI and the next generation of robotics.

AI has the potential to cure diseases, enable smarter cities, tackle many of our environmental challenges, and potentially redefine poverty. There are still many questions to ask about AI and what can go wrong. Elon Musk recently suggested that under some scenarios AI could jeopardise human survival. 

AI's ability to analyse data, and the accuracy with which it does so, is enormous. This will enable the development of smarter machines for business.

But at what cost, and how will we control it? Society needs to seriously rethink AI's potential and its impact on both our society and the way we live.

Artificial intelligence and robotics were initially thought to be a danger only to blue-collar jobs, but that is changing: white-collar workers – such as lawyers and doctors – who carry out purely quantitative analytical processes are also becoming an endangered species. Some of their methods and procedures are increasingly being replicated and replaced by software.

For instance, researchers at MIT's Computer Science and Artificial Intelligence Laboratory, Massachusetts General Hospital and Harvard Medical School developed a machine learning model to better detect cancer.

They trained the model on 600 existing high-risk lesions, incorporating parameters like family history, demographics, and past biopsies. It was then tested on 335 lesions, and the researchers found it could predict the status of a lesion with 97 per cent accuracy, ultimately enabling them to upgrade those lesions to cancer.

Traditional mammograms uncover suspicious lesions, whose findings are then tested with a needle biopsy. Abnormalities would undergo surgery, with around 90 per cent usually turning out to be benign, rendering the procedures unnecessary. As the amount of data and the number of potential variables grow, human clinicians cannot compete at the same level as AI.

So will AI take the clinician’s job, or will it just provide a better diagnostic tool, freeing up clinicians to build a better connection with their patients?

Confusion around the various terminologies relating to AI can warp the conversation. Artificial general intelligence (AGI) is where machines can successfully perform any intellectual task that a human can do - sometimes referred to as “strong AI”, or “full AI”. That is where a machine can perform “general intelligent actions”.

Max Tegmark, in his recent book Life 3.0, describes AI as a machine or computer that displays intelligence. This contrasts with the natural intelligence that you and I and other animals display. AI research is the study of intelligent agents: devices that sense their environment and take actions to maximise their chances of success.

Tegmark refers to Life 3.0 as a representation of our current stage of evolution. Life 1.0 referred to biological origins, or our hardware, which has been controlled by the process of evolution.

Life 2.0 is the cultural development of humanity. This refers to our software, which drives us and our minds. Education and knowledge have been a major influence on this stage of our journey, constantly being updated and upgraded. These versions of Life are based on survival of the fittest, our education and time.

Life 3.0 is the technological age of humanity. We have effectively reached the point where we can upgrade our hardware and software. Not to the level depicted in the movies; that may be possible in the future, but it is likely a while away. All these upgrades have been due to our use of technology, advanced materials and drugs that improve our bodies.

The first renaissance

This was a period between the 14th and 17th centuries. The Renaissance encompassed an innovative flowering of Latin and vernacular literatures, beginning with a resurgence of learning based on classical sources. Various theories have been proposed to account for its origins and characteristics, focusing on a variety of factors including social and civic peculiarities.

Renaissance literally means ‘rebirth’; it was a cultural movement that profoundly affected European intellectual life. This period was a time of exploration and of many changes in society. People were able to ask their questions and explore them.

A ‘Renaissance man’ was a person skilled in multiple disciplines, someone with a broad base of knowledge who pursued several fields of study. A good example from this period was Leonardo da Vinci, a master of art, engineering and anatomy, as well as many other disciplines, with remarkable success.

Einstein was a genius of theoretical physics, but he was not necessarily a Renaissance man. In the past, university students were encouraged to study the liberal arts, the idea being to give a more rounded education.

Not all of these students became polymaths, but the thinking was that a broad-based education would lead to a more developed mind. As Daniel Pink argues in A Whole New Mind, the Master of Fine Arts will become the MBA of the future.

The new renaissance

AI is going to free us from many arduous duties around what we do for work. Businesses that have embraced these changes will grow; others will go. Robotics and AI are starting to have major social and cultural impacts.

We are seeing more protests against technology, with people becoming activists. The inequality between pay and work is impacting many people. Taxi drivers are affected by Uber, hotels by Airbnb, and many more; the rules have changed, and many are not happy. This situation draws a close parallel to the cottage industries of the industrial age, whose disruption brought the rise of the Luddites, led by the possibly mythical Ned Ludd.

The plight of disenfranchised workers faced with innovation, industrial-scale change and the destruction of their industry rings as true today as it did for the Luddites in the industrial age.

“Recently, the term neo-Luddism has emerged to describe opposition to many forms of technology. According to a manifesto drawn up by the Second Luddite Congress in 1996, neo-Luddism is “a leaderless movement of passive resistance to consumerism and the increasingly bizarre and frightening technologies of the Computer Age.” (Wikipedia)

We need to take this time as an opportunity to create a new Renaissance period, enabling more of us to become ‘Renaissance people’, using our creativity and innovative traits. Innovation is what businesses wants but computers struggle to master.

Jobs of the future will come from this aspect of humanity, but if we are not paying attention and simply ignore the situation, the neo-Luddites may have a point, potentially creating a situation comparable to when the Luddites started to break the industrial looms.

Machine-breaking was criminalised in 1721, leading eventually to the Frame Breaking Act of 1812 and the death penalty. That is not to say we will get that far, but some are already building their camps and weaponising themselves for just that eventuality.

So, what can we do?

We need to talk about AI and the future. We need to realise that the impacts are going to be imminent and that we need to plan. Jobs are changing and will continue to change, so you need to prepare. Innovation is a top priority for many organisations.

It can no longer be left to the realm of the geeks and techies. We all need to be more innovative and creative; innovation must increase exponentially and become a core competency. Innovation is a matter of a change in mindset, and of developing the right environment and circumstances.

We need to ask more questions to find the right answers. This is an important skill that many have forgotten or lost. We can find many answers on Google but, without the right question, they are worthless.

We need to explore the process of doing just that, asking the right question to achieve the right outcomes.

Get ready for AI and the future because the future is NOW!

Source: Cio

AI in banking

AI in banking

Artificial intelligence is a new approach to information discovery and decision-making. Inspired by the way the human brain processes information, draws conclusions, and codifies instincts and experiences into learning, it is able to bridge the gap between the intent of big data and the reality of practical decision-making. Artificial Intelligence (AI), machine learning systems, and natural language processing are now no longer experimental concepts but potential business disrupters that can drive insights to aid real-time decision making.

Each week there are new advancements, new technologies, new applications, and new opportunities in AI. It’s inspiring, but also overwhelming. That’s why I created this guide to help you keep pace with all these exciting developments. Whether you’re currently employed in the banking industry, working with Produvia or just pursuing an interest in the subject, there will always be something here to inspire you.

Today, banks and financial servicing companies must embrace artificial intelligence technologies in order to improve business engagement, automation, insights and strategies.

AI Ideas for Banking

There are many opportunities for artificial intelligence in the banking industry. Here are a few AI ideas to consider:

  1. Intelligent Mortgage Loan Approvals
    Imagine technology that pulls third-party data to verify the applicant’s identity, determines whether the bank can offer pre-approval on the basis of a partial application, estimates property value, creates document files for title validation and flood certificate searches, determines loan terms on the basis of risk scoring, develops a strategy to improve conversion, and provides real-time text and voice support via chatbot. (BCG, 2017) Imagine a system that approves mortgage loans by comparing the applicant’s finances with data for existing loan holders. Imagine software that calculates mortgage risk based on a wide range of loan-level characteristics at origination (credit score, loan-to-value ratio, product type and features), as well as a number of variables describing loan performance (e.g., number of times delinquent in the past year) and several time-varying factors that describe the economic conditions a borrower faces, including local variables such as housing prices, average incomes, and foreclosure rates at the zip-code level, as well as national-level variables such as mortgage rates. (Justin Sirignano, 2016)
  2. Risk Management
    Imagine software that gains intelligence from various data sources such as credit scores, financial data, spending patterns. (FinExtra, 2017) Imagine technology that identifies a risk score of a customer based on his or her nationality, occupation, salary range, experience, industry he or she works for, and credit history. (Quora, 2017)
  3. Fraud Detection
    Imagine technology that establishes patterns based on the historical behaviour of account owners. When uncharacteristic transactions occur, an alert is generated indicating the possibility of fraud. (FinExtra, 2017) Imagine software that can detect fraudulent patterns by analyzing historical transaction data. (Feedzai, Nymi, Zoloz, BioCatch)
    Imagine a system that detects suspicious transactions, voice recognition software that confirms the identity of a bank customer whose credit card information has been stolen, and cognitive-automation technology that recommends an action, perhaps via a chatbot, to that customer. (BCG, 2017) Imagine software that detects financial fraud using anomaly detection; a toy sketch of this idea appears after the list.
  4. Credit Risk Management
    Imagine software that allows for more accurate, instant credit decisions by analyzing news and business networks. This system can also be used to improve Early Warning Systems (EWS) and to provide mitigation recommendations. (Accenture, 2017)
  5. Risk and Finance Reporting
    Imagine Robotic Process Automation (RPA) which allows a business to map out simple, rule-based processes and have a computer carry them out on their behalf. Imagine a program that reads and understands unstructured data or text and makes subjective decisions in response, similar to a human. This system enables banks to meet regulatory reporting requirements at speed, whilst reducing costs. (Accenture, 2017)
  6. Customer Service Chatbot
    Imagine a banking chatbot that understands customer behaviour, tracks spending patterns and tailors recommendations on how to manage finances. Imagine a chatbot that helps customers perform routine banking transactions while offering simple insights on improving finance management. Imagine a bot that curates targeted offers and promotes relevant products and services, thereby increasing customer satisfaction. (FinExtra, 2017)
  7. Customer Engagement
    Imagine technology that improves customer understanding and activation through personalization, influencing desired actions. (Deloitte, 2017)
  8. Banking Automation
    Imagine software that automates repetitive, knowledge & natural language rich, human intensive decision processes. (Deloitte, 2017)
  9. Banking Insights
    Imagine technology that determines key patterns and relationships from billions of data sources in real-time to derive deep and actionable insights. (Deloitte, 2017)
  10. Shape Strategies
    Imagine software that builds a deep understanding of company, market dynamics, and disruptive trends to shape strategies. (Deloitte, 2017)
  11. Predict Cash at ATMs
    Imagine an algorithm that predicts the cash required at each of its ATMs across the country, combining this with route-optimization techniques to save money. (McKinsey, 2017)
  12. Detect Anti-Money Laundering (AML) Activity
    Imagine technology that detects anti-money laundering (AML) activity by tracing the true source of money and identifying disguised illegal cash flow. (FinExtra, 2017)
  13. Know-Your-Customer Checks
    Imagine technology that provides continuous monitoring of transactions and can better identify whether a particular transaction is worthy of follow-up investigation, given the system’s analytics of historical transaction patterns and behaviors. (Medium, 2017)
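
None of the sources above publishes its models; as a toy sketch of the anomaly-detection approach to fraud from idea 3, an off-the-shelf isolation forest can illustrate the pattern (transaction features and amounts are invented):

    # Toy sketch of fraud detection as anomaly detection (idea 3 above):
    # fit on historical transactions, then flag outliers in new activity.
    # Amounts and features are invented for illustration.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Historical transactions: (amount in $, hour of day)
    normal = np.column_stack([rng.normal(60, 20, 1000),
                              rng.normal(14, 3, 1000)])

    detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

    new_txns = np.array([[55.0, 13.0],     # ordinary afternoon purchase
                         [4900.0, 3.0]])   # large purchase at 3 a.m.
    print(detector.predict(new_txns))      # 1 = normal, -1 = flagged as anomalous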

Practical AI In Banking

There are many banks that are now incorporating artificial intelligence technologies. Here are a few of our favourites:

  1. In Europe, more than a dozen banks have replaced older statistical-modeling approaches with machine-learning techniques and, in some cases, experienced 10 percent increases in sales of new products, 20 percent savings in capital expenditures, 20 percent increases in cash collections, and 20 percent declines in churn. The banks have achieved these gains by devising new recommendation engines for clients in retailing and in small and medium-sized companies. They have also built microtargeted models that more accurately forecast who will cancel service or default on their loans, and how best to intervene. (McKinsey, 2015)
  2. In Canada, a major Canadian Bank reduced watch list checks from 12 hours to less than 15 minutes, increased name checks from 2,500 to more than 40,000, reduced false positives by 75%, and realized ROI in 3 months. (IBM, 2017)
  3. A South American Bank improved efficiency by 60% by reducing administrative costs. They also reduced AML alerts by 90% which in turn increased accuracy by 60%. (IBM, 2017)

Back to the core of intelligence

Back to the core of intelligence

Two decades ago I (José Hernández-Orallo) started working on metrics of machine intelligence. By that time, during the glacial days of the second AI winter, few were really interested in measuring something that AI lacked completely. And very few, such as David L. Dowe and I, were interested in metrics of intelligence linked to algorithmic information theory, where the models of interaction between an agent and the world were sequences of bits, and intelligence was formulated using Solomonoff’s and Wallace’s theories of inductive inference.

In the meantime, seemingly dozens of variants of the Turing test were proposed every year, the CAPTCHAs were introduced and David showed how easy it is to solve some IQ tests using a very simple program based on a big-switch approach. And, today, a new AI spring has arrived, triggered by a blossoming machine learning field, bringing a more experimental approach to AI with an increasing number of AI benchmarks and competitions (see a previous entry in this blog for a survey).

Considering this 20-year perspective, last year was special in many ways. The first in a series of workshops on evaluating general-purpose AI took off, echoing the increasing interest in the assessment of artificial general intelligence (AGI) systems, capable of finding diverse solutions for a range of tasks. Evaluating these systems is different, and more challenging, than the traditional task-oriented evaluation of specific systems, such as a robotic cleaner, a credit scoring model, a machine translator or a self-driving car. The idea of evaluating general-purpose AI systems using videogames had caught on. The arcade learning environment (the Atari 2600 games) or the more flexible Video Game Definition Language and associated competition became increasingly popular for the evaluation of AGI and its recent breakthroughs.

Last year also witnessed the introduction of a different kind of AI evaluation platform, such as Microsoft’s Malmö, GoodAI’s School, OpenAI’s Gym and Universe, DeepMind’s Lab, Facebook’s TorchCraft and CommAI-env. Based on a reinforcement learning (RL) setting, these platforms make it possible to create many different tasks and connect RL agents through a standard interface. Many of these platforms are well suited to the new paradigms in AI, such as deep reinforcement learning and some open-source machine learning libraries. After thousands of episodes or millions of steps on a new task, these systems are able to excel, usually with better-than-human performance.

Despite the myriads of applications and breakthroughs that have been derived from this paradigm, there seems to be a consensus in the field that the main open problem lies in how an AI agent can reuse the representations and skills from one task to new ones, making it possible to learn a new task much faster, with a few examples, as humans do. This can be seen as a mapping problem (usually under the term transfer learning) or can be seen as a sequential problem (usually under the terms gradual, cumulative, incremental, continual or curriculum learning).

One of the key notions that is associated with this capability of a system of building new concepts and skills over previous ones is usually referred to as “compositionality”, which is well documented in humans from early childhood. Systems are able to combine the representations, concepts or skills that have been learned previously in order to solve a new problem. For instance, an agent can combine the ability of climbing up a ladder with its use as a possible way out of a room, or an agent can learn multiplication after learning addition.

In my opinion, two of the previous platforms are better suited for compositionality: Malmö and CommAI-env. Malmö has all the ingredients of a 3D game, and AI researchers can experiment and evaluate agents with vision and 3D navigation, which is what many research papers using Malmö have done so far, as this is a hot topic in AI at the moment. However, to me, the most interesting feature of Malmö is building and crafting, where agents must necessarily combine previous concepts and skills in order to create more complex things.

CommAI-env is clearly an outlier in this set of platforms. It is not a video game in 2D or 3D. Video or audio don’t have any role there. Interaction is just produced through a stream of input/output bits and rewards, which are just +1, 0 or -1. Basically, actions and observations are binary. The rationale behind CommAI-env is to give prominence to communication skills, but it still allows for rich interaction, patterns and tasks, while “keeping all further complexities to a minimum”.

When I became aware that the General AI Challenge was using CommAI-env for its warm-up round, I was ecstatic. Participants could focus on RL agents without the complexities of vision and navigation. Of course, vision and navigation are very important for AI applications, but they create many extra complications if we want to understand (and evaluate) gradual learning. For instance, two equal tasks for which the texture of the walls changes can be seen as requiring higher transfer effort than two slightly different tasks with the same texture. In other words, these would be extra confounding factors that would make the analysis of task transfer and task dependencies much harder. It is then a wise choice to exclude this from the warm-up round. There will be occasions during other rounds of the challenge for including vision, navigation and other sorts of complex embodiment. Starting with a minimal interface to evaluate whether the agents are able to learn incrementally is not only a challenging but an important open problem for general AI.

Also, the warm-up round has modified CommAI-env in such a way that bits are packed into 8-bit (1 byte) characters. This makes the definition of tasks more intuitive and makes the ASCII coding transparent to the agents. Basically, the set of actions and observations is extended to 256. But interestingly, the set of observations and actions is the same, which allows many possibilities that are unusual in reinforcement learning, where these subsets are different. For instance, an agent with primitives such as “copy input to output” and other sequence transformation operators can compose them in order to solve the task. Variables, and other kinds of abstractions, play a key role.
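
The real CommAI-env interface is defined in Facebook’s open-source repository and is not reproduced here; purely as a schematic of the interaction style just described, here is a toy Python loop with a trivial agent built from a ‘copy input to output’ primitive:

    # Schematic only -- not the real CommAI-env API. It sketches the
    # interaction style described above: one character out, one character in,
    # plus a reward of +1, 0 or -1, with an agent built from a
    # "copy input to output" primitive.
    class EchoAgent:
        """Trivial agent: its action is whatever character it last observed."""
        def __init__(self):
            self.last_observation = " "

        def act(self):
            return self.last_observation

        def observe(self, char, reward):
            self.last_observation = char

    def run_episode(agent, task_string):
        """Reward +1 when the agent echoes the previous character, else -1."""
        total = 0
        expected = None
        for char in task_string:
            action = agent.act()
            reward = 0 if expected is None else (1 if action == expected else -1)
            total += reward
            agent.observe(char, reward)
            expected = char  # the next correct action is to copy this character
        return total

    print(run_episode(EchoAgent(), "hello world"))  # near-maximal reward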

This might give the impression that we are back to Turing machines and symbolic AI. In a way, this is the case, and much in alignment with Turing’s vision in his 1950 paper: “it is possible to teach a machine by punishments and rewards to obey orders given in some language, e.g., a symbolic language”. But in 2017 we have a range of techniques that weren’t available just a few years ago. For instance, Neural Turing Machines and other neural networks with symbolic memory can be very well suited for this problem.

By no means does this indicate that the legion of deep reinforcement learning enthusiasts cannot bring their apparatus to this warm-up round. Indeed, they won’t be disappointed by this challenge if they work hard to adapt deep learning to this problem. They probably won’t need a convolutional network tuned for visual pattern recognition, but there are many possibilities and challenges in how to make deep learning work in a setting like this, especially because the fewer examples needed, the better, and deep learning usually requires many examples.

As a plus, the simple, symbolic sequential interface opens the challenge to many other areas in AI, not only recurrent neural networks but techniques from natural language processing, evolutionary computation, compression-inspired algorithms or even areas such as inductive programming, with powerful string-handling primitives and its appropriateness for problems with very few examples.

I think that all of the above makes this warm-up round a unique competition. Of course, since we haven’t had anything similar in the past, we might have some surprises. It might happen that an unexpected (or even naïve) technique could behave much better than others (and humans) or perhaps we find that no technique is able to do something meaningful at this time.

I’m eager to see how this round develops and what the participants are able to integrate and invent in order to solve the sequence of micro and mini-tasks. I’m sure that we will learn a lot from this. I hope that machines will, too. And all of us will move forward to the next round!

Source: Medium

History of Chatbots

History of Chatbots

Turing Test

Are you familiar with the Turing Test? For the uninitiated, the Turing Test was developed by Alan Turing, the original computer nerd, in 1950. The idea is simple: for a machine to pass the Turing Test, it must exhibit intelligent behavior indistinguishable from that of a human being.

The test is usually conceptualized with one person—the interrogator—speaking through a computerized interface with two different entities, hidden from view. One is an actual computer, one is a human being. If the interrogator is unable to determine which is which, the computer has passed the Turing Test.

Despite experts working on this problem for nearly seventy years, machines able to even approach success at the Turing Test have been rare. However, not being able to strictly pass the Turing Test doesn’t mean these systems—what we call chatbots today—are useless. They can handle simple tasks like taking food orders, answering basic customer support questions and offering suggestions based on a request (like Siri and Alexa). They serve an important and growing role in our society, and it’s worth looking at how they’ve developed to this point.

ELIZA

The first true chatbot was called ELIZA, developed in the mid-1960s by Joseph Weizenbaum at MIT. On a basic level, its design allowed it to converse through pattern matching and substitution. In the same way someone can listen to you, then offer a response that involves an idea you didn’t specifically mention (“Where should we eat?” “I like that Thai place on the corner.”), ELIZA was programmed to understand patterns of human communication and offer responses that included the same type of substitutions. This gave the illusion that ELIZA understood the conversation.
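
Weizenbaum’s original script language was far richer, but the core pattern-matching-and-substitution trick can be sketched in a few lines of Python (a toy reconstruction, not ELIZA’s actual code):

    # Toy reconstruction of ELIZA-style pattern matching and substitution;
    # Weizenbaum's original used a much richer script language than this.
    import re

    RULES = [
        (r"I need (.*)", "Why do you need {0}?"),
        (r"I am (.*)", "How long have you been {0}?"),
        (r"(.*) mother(.*)", "Tell me more about your family."),
        (r"(.*)", "Please go on."),   # fallback keeps the conversation going
    ]

    # Reflect pronouns so "my" in the input becomes "your" in the reply
    REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}

    def reflect(fragment):
        return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

    def respond(utterance):
        for pattern, template in RULES:
            match = re.match(pattern, utterance, re.IGNORECASE)
            if match:
                return template.format(*(reflect(g) for g in match.groups()))

    print(respond("I need a break from my job"))
    # -> Why do you need a break from your job?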

The most famous version of ELIZA used the DOCTOR script. This allowed it to simulate a Rogerian psychotherapist, and even today it gives responses oddly similar to what we might find in a therapy session—it responds to inputs by trying to draw more information out of the speaker, rather than offer concrete answers. By modern standards, we can tell the conversation goes off the rails quickly, but its ability to maintain a conversation for as long as it does is impressive when we remember it was programmed using punch cards.

PARRY

The next noteworthy chatbot came relatively soon afterward, in 1972. Sometimes referred to as “ELIZA with attitude”, PARRY simulated the thinking of a paranoid person or paranoid schizophrenic. It was designed by a psychiatrist, Kenneth Colby, who had become disenchanted with psychoanalysis due to its inability to generate enough reliable data to advance the science.

Colby believed computer models of the mind offered a more scientific approach to the study of mental illness and cognitive processes overall. After joining the Stanford Artificial Intelligence Laboratory, he used his experience in the psychiatric field to program PARRY, a chatbot that mimicked a paranoid individual: it consistently misinterpreted what people said, assumed they had nefarious motives and were always lying, and would not allow inquiries into certain aspects of PARRY’s “life”. While ELIZA was never expected to mimic human intelligence—although it did occasionally fool people—PARRY was a much more serious attempt at creating an artificial intelligence, and in the early 1970s, it became the first machine to pass a version of the Turing Test.

Dr. Sbaitso and A.L.I.C.E.

The 1990s saw the advent of two more important chatbots. First was a chatbot designed to actually speak to you: Dr. Sbaitso. Although similar to previous chatbots, with improved pattern recognition and substitution programming, Dr. Sbaitso became known for its weird digitized voice that sounded not at all human, yet did a remarkable job of speaking with correct inflection and grammar. Later, in 1995, A.L.I.C.E. came along, inspired by ELIZA. Its heuristic matching patterns proved a substantial upgrade on previous chatbots; although it never passed a true Turing Test, upgrades to A.L.I.C.E.’s algorithm made it a Loebner Prize winner in 2000, 2001, and 2004.

Speaking of the Loebner Prize

Since the invention of ELIZA and PARRY, chatbot technology has continued to improve; however, the most notable contribution of the last thirty years has arguably come in the form of the Loebner Prize. Instituted in 1991, the annual competition awards prizes to the most human-like computer programs, continuing to the present day. Initially the competition required judges to have highly restricted conversations with the chatbots, which led to a great deal of critique; for example, the rules initially required judges to limit themselves to “whimsical conversation”, which played directly into the odd responses often generated by chatbots. Time limits also worked against truly testing the bots, as only so many questions could be asked in five minutes or less, given the less-than-instant response speeds of computers of that era. One critic, Marvin Minsky, even offered a prize in 1995 to anyone who could stop the competition.

However, the restrictions of the early years were soon lifted, and from the mid-1990s on there have been no limitations placed on what the judges discuss with the bots. Chatbot technology improves every year in part thanks to the Loebner Prize, as programmers chase a pair of one-time awards that have yet to be won. The first is $25,000 for the first program that judges cannot distinguish from a human to the extent that it convinces judges the human is the computer. The other is $100,000 for the first program to pass a stricter Turing Test, where it can decipher and understand not just text, but auditory and visual input as well. Pushing AI development to be capable of this was part of Loebner’s goal in starting the competition; as such, once the $100,000 prize is claimed, the competition will end.

Siri and Alexa

Of course, as important as these goals are, chatbots have been developed with other goals in mind. Siri and Alexa, for example, are artificial intelligences and make no attempt to fool us otherwise; Apple and Amazon, respectively, improve them by enhancing their ability to find relevant answers to our questions. In addition, many of us are familiar with Watson, the computer that competed on Jeopardy! It works not by attempting to be human, but by processing natural language and using that “understanding” to find more and more information online. The process proved very successful—in 2011, Watson beat a pair of former Jeopardy! champions.

We should also note that not all chatbot experiments are successful. The most recent failure, and certainly the most high-profile, was Tay, Microsoft’s Twitter-based chatbot. The intent was for Tay to interact with Twitter users and learn how to communicate with them. Unfortunately, in less than a day, Tay’s primary lesson was how to be incredibly racist, and Microsoft shut down the account.

Even in that negative instance, however, the technology showed it was definitely capable of learning. In the case of Tay, and anyone else seeking to create something similar, the next task is to work on how to filter bad lessons, or tightly control its learning sources. More broadly speaking, all of these examples show how chatbots have evolved, continue to evolve, and are certainly something we should expect to see more and more in the coming years and decades.

Source: Chatbotpack

The AI skills crisis & how to close the gap


Now that nearly every company is considering how artificial intelligence (AI) applications can positively impact their businesses, they are on the hunt for professionals to help them make their vision a reality. According to research done by Glassdoor, data scientists have the No. 1 job in the United States. The survey looked at salary, job satisfaction and the number of job openings. If you have recent experience looking for AI specialists to join your team, it’s quite clear that we’re facing an AI skills crisis. In order to move AI projects from ideation into implementation, companies will need to determine how to close the AI skills gap so they have experts on their team to get the job done.

 

Factors that contribute to the AI talent shortage

One report suggested there are about 300,000 AI professionals worldwide, but millions of roles to fill. While these are speculative figures, the competitive salaries and benefits packages and the aggressive recruiting tactics firms roll out to attract AI talent suggest the supply of AI talent is nowhere near matching the demand.

As the democratization of AI and deep learning expands, making these applications viable not just for tech giants but for small- and medium-sized businesses, the demand for AI professionals to do the work has ballooned as well. Excitement about AI’s various applications is building among the C-suite and corporate management, and once they have bought into the concept (which is happening more and more rapidly), they want to make it real right away.

The 2018 “How Companies Are Putting AI to Work Through Deep Learning” survey from O’Reilly reveals the AI skills gap is the largest barrier to AI adoption, although data challenges, company culture, hardware and other company resources are also impediments. These results parallel a recent Ernst & Young poll that confirmed 56% of senior AI professionals believed the lack of qualified AI professionals was the single biggest barrier to AI implementation across business operations.

Another reason for the AI skills crisis is that our academic and training programs just can’t keep up with the pace of innovation and new discoveries in AI. Not only do AI professionals need formal training, they need on-the-job experience. As a result, there aren’t enough experienced AI professionals to step into the leadership roles required by organizations that are just beginning to adopt AI strategies into their operations.

Source: Forbes

Blockchain technology: “We aspire to make the EU the leading player”


Blockchain technology is increasingly being used for anything from crypto currencies to casting votes. Parliament is working on a public policy to stimulate its development.

Blockchain technology is based on digital ledgers, public records that can be used and shared simultaneously. The technology is probably best known as being the basis for Bitcoin and other crypto currencies, but it is also used in many other sectors, ranging from creative industries to public services.

 

MEPs now want to help create a public policy that supports the development of blockchain and other related technologies.

 

"Disruptive element"

 

Greek S&D member Eva Kaili has written a resolution, which was adopted in a parliamentary committee vote on 16 May. In it she calls for “open-minded, progressive and innovation-friendly regulation”.

 

However, the MEP warned that the technology could lead to significant changes. “Blockchain and distributed ledger technologies in general have a strong disruptive element that will affect many sectors," she said. "Financial services is just one." The resolution also looked at how the technologies could reduce the number of intermediaries in other sectors such as energy, health care, education, the creative industries and the public sector.

 

Kaili is also the chair of the Science and Technology Options Assessment panel, which provides MEPs with independent, high-quality and scientifically impartial studies and information to help assess the impact of new technologies.

Making the EU the leading player

 

The EU has an important role to play in cultivating this technology, said Kaili. “We aspire to make the EU the leading player in the field of blockchain," she said. "We experience a strong entrepreneurial interest in blockchain. We, as regulators, need to make sure that all this effort will be embraced by the necessary institutional and legal certainty."

 

Another concern is the impact the technology could have on people and their data. Kaili said that as technology evolves, the risks do too. “It is not smart to regulate the technology per se, but rather its uses and the sectors that adopt this technology in their business models. Consumer protection and investor protection come first.”

 

Investment

 

The EU has already been promoting the technology. For example, it has invested more than €80 million in projects supporting the use of blockchain. The European Commission has said around €300 million more will be allocated by 2020.

 

In addition, the Commission launched the EU Blockchain Observatory and Forum in February 2018.

  

Next steps

 

All MEPs will have the opportunity to vote on the resolution during an upcoming plenary session. If adopted, the resolution will be forwarded to the European Commission for consideration.

 Source: Europarl

Artificial Intelligence helps to predict the likelihood of life on other planets


Developments in artificial intelligence may help us to predict the probability of life on other planets, according to new work by a team based at Plymouth University. The study uses artificial neural networks (ANNs) to classify planets into five types, estimating a probability of life in each case, which could be used in future interstellar exploration missions. The work is presented at the European Week of Astronomy and Space Science (EWASS) in Liverpool on 4 April by Mr Christopher Bishop.

Artificial neural networks are systems that attempt to replicate the way the human brain learns. They are one of the main tools used in machine learning, and are particularly good at identifying patterns that are too complex for a biological brain to process.

The team, based at the Centre for Robotics and Neural Systems at Plymouth University, have trained their network to classify planets into five different types, based on whether they are most like the present-day Earth, the early Earth, Mars, Venus or Saturn's moon Titan. All five of these objects are rocky bodies known to have atmospheres, and are among the most potentially habitable objects in our Solar System.

Mr Bishop comments, "We're currently interested in these ANNs for prioritising exploration for a hypothetical, intelligent, interstellar spacecraft scanning an exoplanet system at range."

He adds, "We're also looking at the use of large area, deployable, planar Fresnel antennas to get data back to Earth from an interstellar probe at large distances. This would be needed if the technology is used in robotic spacecraft in the future."

Atmospheric observations -- known as spectra -- of the five Solar System bodies are presented as inputs to the network, which is then asked to classify them in terms of the planetary type. As life is currently known only to exist on Earth, the classification uses a 'probability of life' metric which is based on the relatively well-understood atmospheric and orbital properties of the five target types.
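
As a rough illustration of that setup (not the Plymouth team’s actual code or data), a small neural network classifier could be trained on labelled spectra along these lines, with the per-class probabilities serving as a crude ranking across the five types:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Stand-in data: each "spectrum" is a vector of atmospheric parameters,
    # labelled with one of the five Solar System analogues. Real spectra and
    # labels would come from observations and models, not random numbers.
    rng = np.random.default_rng(0)
    classes = ["present-day Earth", "early Earth", "Mars", "Venus", "Titan"]
    X_train = rng.random((500, 300))              # 500 synthetic spectra, 300 parameters each
    y_train = rng.integers(0, len(classes), 500)  # synthetic class labels

    net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
    net.fit(X_train, y_train)

    # For a new, unseen spectrum, per-class probabilities give a ranking
    # in the spirit of the study's "probability of life" metric.
    new_spectrum = rng.random((1, 300))
    for name, p in zip(classes, net.predict_proba(new_spectrum)[0]):
        print(f"{name}: {p:.2f}")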

Bishop has trained the network with over a hundred different spectral profiles, each with several hundred parameters that contribute to habitability. So far, the network performs well when presented with a test spectral profile that it hasn't seen before.

“Given the results so far, this method may prove to be extremely useful for categorising different types of exoplanets using results from ground-based and near Earth observatories,” says Dr Angelo Cangelosi, the supervisor of the project.

The technique may also be ideally suited to selecting targets for future observations, given the increase in spectral detail expected from upcoming space missions such as ESA’s Ariel Space Mission and NASA’s James Webb Space Telescope.

Source: Sciencedaily

Deep Learning comes full circle


Artificial intelligence drew much inspiration from the human brain but went off in its own direction. Now, AI has come full circle and is helping neuroscientists better understand how our own brains work.

For years, the people developing artificial intelligence drew inspiration from what was known about the human brain, and it has enjoyed a lot of success as a result. Now, AI is starting to return the favor.

Although not explicitly designed to do so, certain artificial intelligence systems seem to mimic our brains’ inner workings more closely than previously thought, suggesting that both AI and our minds have converged on the same approach to solving problems. If so, simply watching AI at work could help researchers unlock some of the deepest mysteries of the brain.

“There’s a real connection there,” said Daniel Yamins, assistant professor of psychology. Now, Yamins, who is also a faculty scholar of the Stanford Neurosciences Institute and a member of Stanford Bio-X, and his lab are building on that connection to produce better theories of the brain – how it perceives the world, how it shifts efficiently from one task to the next and perhaps, one day, how it thinks.

A vision problem for AI

Artificial intelligence has been borrowing from the brain since its early days, when computer scientists and psychologists developed algorithms called neural networks that loosely mimicked the brain. Those algorithms were frequently criticized for being biologically implausible – the “neurons” in neural networks were, after all, gross simplifications of the real neurons that make up the brain. But computer scientists didn’t care about biological plausibility. They just wanted systems that worked, so they extended neural network models in whatever way made the algorithm best able to carry out certain tasks, culminating in what is now called deep learning.

Then came a surprise. In 2012, AI researchers showed that a deep learning neural network could learn to identify objects in pictures as well as a human being, which got neuroscientists wondering: How did deep learning do it?

The same way the brain does, as it turns out. In 2014, Yamins and colleagues showed that a deep learning system that had learned to identify objects in pictures – nearly as well as humans could – did so in a way that closely mimicked the way the brain processes vision. In fact, the computations the deep learning system performed matched activity in the brain’s vision-processing circuits substantially better than any other model of those circuits.

Around the same time, other teams made similar observations about parts of the brain’s vision- and movement-processing circuits, suggesting that given the same kind of problem, deep learning and the brain had evolved similar ways of coming up with a solution. More recently, Yamins and colleagues have demonstrated similar observations in the brain’s auditory system.

On one hand, that’s not a big surprise. Although the technical details differ, deep learning’s conceptual organization is borrowed directly from what neuroscientists already knew about the organization of neurons in the brain.

But the success of Yamins and colleagues’ approach, and others like it, depends just as much on another, more subtle choice. Rather than try to get the deep learning system to directly match what the brain does at the level of individual neurons, as many researchers had done, Yamins and colleagues simply gave their deep learning system the same problem: identify objects in pictures. Only after it had solved that problem did the researchers compare how deep learning and the brain arrived at their solutions – and only then did it become clear that their methods were essentially the same.

“The correspondence between the models and the visual system is not entirely a coincidence, because one directly inspired the other,” said Daniel Bear, a postdoctoral researcher in Yamins’ group, “but it’s still remarkable that it’s as good a correspondence as it is.”

One likely reason for that, Bear said, is natural selection and evolution. “Basically, object recognition was a very evolutionarily important task” for animals to solve – and solve well, if they wanted to tell the difference between something they could eat and something that could eat them. Perhaps trying to do that as well as humans and other animals do – except with a computer – led researchers to find essentially the same solution.

Seek what the brain seeks

Whatever the underlying reason, insights gleaned from the 2014 study led to what Yamins calls goal-directed models of the brain: Rather than try to model neural activity in the brain directly, instead train artificial intelligence to solve problems the brain needs to solve, then use the resulting AI system as a model of the brain. Since 2014, Yamins and collaborators have been refining the original goal-directed model of the brain’s vision circuits and extending the work in new directions, including understanding the neural circuits that process inputs from rodents’ whiskers.

In perhaps the most ambitious project, Yamins and postdoctoral fellow Nick Haber are investigating how infants learn about the world around them through play. Their infants – actually relatively simple computer simulations – are motivated only by curiosity. They explore their worlds by moving around and interacting with objects, learning as they go to predict what happens when they hit balls or simply turn their heads. At the same time, the model learns to predict what parts of the world it doesn’t understand, then tries to figure those out.

While the computer simulation begins life – so to speak – knowing essentially nothing about the world, it eventually figures out how to categorize different objects and even how to smash two or three of them together. Although direct comparisons with babies’ neural activity might be premature, the model could help researchers better understand how infants use play to learn about their environments, Haber said.

On the other end of the spectrum, models inspired by artificial intelligence could help solve a puzzle about the physical layout of the brain, said Eshed Margalit, a graduate student in neurosciences. As the vision circuits in infants’ brains develop, they form specific patches – physical clusters of neurons – that respond to different kinds of objects. For example, humans and other primates all form a face patch that is active almost exclusively when they look at faces.

Exactly why the brain forms those patches, Margalit said, isn’t clear. The brain doesn’t need a face patch to recognize faces, for example. But by building on AI models like Yamins’ that already solve object recognition tasks, “we can now try to model that spatial structure and ask questions about why the brain is laid out this way and what advantages it might give an organism,” Margalit said.

Closing the loop

There are other issues to tackle as well, notably how artificial intelligence systems learn. Right now, AI needs much more training – and much more explicit training – than humans do in order to perform as well on tasks like object recognition, although how humans succeed with so little data remains unclear.

A second issue is how to go beyond models of vision and other sensory systems. “Once you have a sensory impression of the world, you want to make decisions based on it,” Yamins said. “We’re trying to make models of decision making, learning to make decisions and how you interface between sensory systems, decision making and memory.” Yamins is starting to address those ideas with Kevin Feigelis, a graduate student in physics, who is building AI models that can learn to solve many different kinds of problems and switch between tasks as needed, something very few AI systems are able to do.

In the long run, Yamins and the other members of his group said all of those advances could feed into more capable artificial intelligence systems, just as earlier neuroscience research helped foster the development of deep learning. “I think people in artificial intelligence are realizing there are certain very good next goals for cognitively inspired artificial intelligence,” Haber said, including systems like his that learn by actively exploring their worlds. “People are playing with these ideas.”

Source: Stanford

The impact of Big Data on supply chain


You receive a notification on your phone that a critical shipment from your China factory has missed its filing deadline with the customs broker. Your logistics manager is alerted that there is an 80% chance that the components he’s waiting for are likely to be delayed another 48 hours by excessive port traffic and your GTM software advises diverting the shipment to an alternate port facility. Your compliance officer is informed that there is a 95% chance that a shipment of parts from Malaysia is likely to be held for up to three days to be subjected to a detailed customs inspection.

If you think this type of information would be of great assistance to your supply chain business planning and operations, you are not alone. It is this type of integrated data and communication that is becoming the backbone of the Big Data-led revolution underway in supply chain.

The human brain can only process and make use of a limited amount of information before it becomes overwhelmed and unable to effectively recognise patterns and trends. But powerful algorithms and the software platforms they drive can take in almost unlimited numbers of data points and process them to generate insights impossible for an individual or even an entire organisation of individuals to identify. And powering this technology-driven transformation of supply chain is Big Data.

Big Data vs small data

To really understand how technology is transforming supply chain, it is important to understand how Big Data differs from any other form of information gathering. Data has always been crucial to efficient supply chain operations so what has actually changed in recent years? How is “Big Data” different from the analysis of “small data” that has always occurred in the industry?

Big Data refers to sets of both structured and unstructured data with so much volume that traditional data processing systems are inadequate to cope with it all. It can be further defined by some of the basic properties that apply to it:

  • Variety – data generated from a wide range of varied sources
  • Volume – while there is no set point at which small data stops and Big Data starts, Big Data involves large storage requirements, often measured in many multiples of terabytes
  • Velocity – the speed at which the data can be acquired, transferred and stored
  • Complexity – difficulties encountered in forming relevant relationships in data, especially when it is taken from multiple sources
  • Value – the degree to which querying the data will result in generating beneficial outcomes

The most important property of Big Data is, as the name implies, volume. We normally think of data purely in terms of text or numbers, but it also includes the billions of emails, images, and tweets generated every day. In fact, data generation is expanding at a rate that doubles every two years, and human- and machine-generated data is growing at 10 times the rate of traditional business data. IT World Canada projects that by 2020, you would need a stack of iPad Air tablets extending from the earth to the moon to store the world’s digital data.

But the real focus behind a preference for Big Data analysis over small data systems is the ability to uncover hidden trends and relationships in both structured and unstructured data. In most cases, using small data collection and analytics processes simply cannot identify crucial information in a timely manner to allow key decisions to be made or opportunities to be taken advantage of. In other cases, using small data systems is simply a waste of resources and leads to disruptions to supply chain operations.

By contrast, if used correctly, Big Data is the key to enhancing supply chain performance by increasing visibility, control, agility, and responsiveness. Making decisions based on high-quality information in context can benefit the full range of supply chain operations, from demand forecasting, inventory and logistics planning to execution, shipping, and warehouse management.

Big Data possibilities

Big Data analytics becomes a vital tool for making sense of the huge volumes of data that are produced every day. This data comes from a whole range of activities undertaken by people associated with supply chain, whether they be customers, suppliers, or your own staff. The range and volume of this data is continuously increasing, with billions of data points generated by sources we see as directly linked to supply chain such as network nodes and transaction and shipping records as well as other areas that more indirectly impact supply chains such as retail channels and social media content.

It is increasingly necessary to harness this data in order to remain competitive. This is evident from statements made by people such as Anthony Coops, Asia Pacific Data and Analytics Leader at KPMG Australia, who believes that “Big Data is certainly enabling better decisions and actions, and facilitating a move away from gut feel decision making.” At the same time, he recognises that solutions need to be put in place that allow people and organisations to have complete faith in the data, so that managers can really trust the analytics and be confident in their decision making.

The need for confidence in the analytics is evident in examples such as GTM software advising ahead of time that shipping stock should be diverted to an alternate port, or that a product is likely to be held up in customs. These types of decisions have potentially large financial consequences, but when implemented correctly, it is easy to see how supply chain operational efficiency can be significantly boosted by effective use of Big Data analytics.

Many organisations are also using Big Data solutions to support integrated business planning and to better understand market trends and consumer behaviours. The integration of a range of market, product sales, social media trends, and demographic data from multiple data sources provides the capability to accurately predict and plan numerous supply chain actions.

IoT and AI-based analytics are used to predict asset maintenance requirements and avoid unscheduled downtime. IoT can also provide real-time production and shipping data while GPS driven data combined with traffic and weather information allows for dynamically planned and optimised shipping and delivery routes. These types of examples provide a glimpse into the possibilities and advantages that Big Data can offer in increasing the agility and efficiency of supply chain operations.

Disruptive technologies

What is driving these possibilities is the development of numerous disruptive technologies as well as the integration of both new and existing technologies to create high-quality networks of information. Disruptive technologies impact the way organisations operate by forcing them to deal with new competitive platforms. They also provide them with opportunities to enter new markets or to change the company’s competitive status. By identifying key disruptive technologies early, supply chain organisations can not only be better placed to adapt to changing market conditions, they can also gain a distinct advantage over others in the industry that are reluctant to embrace change.

In terms of Big Data based disruptive technologies, these are largely driven by the effects of constantly evolving and emergent internet technologies such as the Internet of Things combined with increased computing power, AI and machine learning based analytics platforms, and fast, pervasive digital communications. These technologies then act as drivers that spawn new ways of managing products, assets, and staff as well as generating new ways of thinking about organisational structures and workflows.

IoT

After years of talk, we are now starting to see the Internet of Things really taking shape. Analysts project a thirty-fold increase in the number of Internet-connected physical devices by 2020, and this will significantly impact the ways that supply chains operate.

IoT allows for numerous solutions to intelligently connect systems, people, processes, data, and devices via a network of connected sensors. Through improved data collection and intelligence, supply chains will benefit as greater automation of the manufacturing and shipping process becomes possible through enhanced visibility of activities from the warehouse to the customer.

Cloud-based GPS and Radio Frequency Identification (RFID) technologies, which provide location, product identification and other tracking information, play a key role in the IoT landscape. Sensors can be used to provide a wealth of information targeted to specific niches within supply chain, such as fresh produce distribution, where temperature or humidity levels can be precisely tracked along the entire journey of a product. Data gathered from GPS and RFID technologies also facilitates automated shipping and delivery processes by precisely predicting the time of arrival.
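
As a minimal, hypothetical example of the kind of check such sensor data enables (the field names and temperature band are invented):

    # Hypothetical cold-chain check: flag any leg of a shipment where a
    # tagged pallet's temperature left the allowed band.
    readings = [
        {"tag_id": "RFID-0042", "leg": "warehouse", "temp_c": 3.9},
        {"tag_id": "RFID-0042", "leg": "truck",     "temp_c": 8.4},
        {"tag_id": "RFID-0042", "leg": "port",      "temp_c": 4.1},
    ]

    TEMP_MIN, TEMP_MAX = 2.0, 6.0  # allowed band for, say, fresh produce

    for r in readings:
        if not TEMP_MIN <= r["temp_c"] <= TEMP_MAX:
            print(f"{r['tag_id']}: {r['temp_c']} C on the {r['leg']} leg "
                  f"is outside {TEMP_MIN}-{TEMP_MAX} C")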

Big Data analytics

Big Data analytics encompasses the qualitative and quantitative techniques that are used to generate insights to enhance productivity. The more supply chain technologies are reliant on Big Data, either in their business model or as a result of their impact on an organisation, the more organisations have to rely on the effective use of Big Data analytics to help them make sense of the volumes of data being generated. Analytics also helps to make it possible to understand the processes and strategies used by competitors across the industry. Using analytics effectively allows an organisation to make the best decisions to ensure they stay at the forefront of their particular market sector.

As corporations face financial pressures to increase profit margins and customer expectation pressures to shorten delivery times, the importance of Big Data analytics continues to grow. A Gartner, Inc. study put the 2017 business intelligence and analytics market at a value over USD$18 billion, while sales of prescriptive analytics software are estimated to grow from approximately USD$415 million in 2014 to USD$1.1 billion in 2019.

The effectiveness and capabilities of analytics software also continue to improve as machine learning-based technologies take forecast data and continually compare it back to real operational and production data. Because of the iterative nature of artificial intelligence-powered algorithms, the longer an organisation operates its analytics software, the more the performance and value of the software improve. This leads to benefits such as more accurate forecasts of shipping times or of supplier obstacles and bottlenecks.

Consumer behaviour analysis

Although it may not initially seem as vital to supply chain as other disruptive technologies, consumer behaviour analysis can have a huge impact on businesses working in supply chain, especially e-commerce businesses. Through what is known as clickstream analysis, large amounts of company, industry, product, and customer information can be gathered from the web. Various text and web mining tools and techniques are then used to both organise and visualise this information.

By analysing customer clickstream data logs, web analytics tools such as Google Analytics can provide a trail of online customer activities and provide insights on their purchasing patterns. This allows more accurate seasonal forecasts to be generated that can then drive inventory and resourcing plans. This type of data is extremely valuable and is crucial for any organisations operating in the e-commerce space. While retailers and consumer companies have always collected data on buying patterns, the ability to pull together information from potentially thousands of different variables that have traditionally been collected in silos provides enormous economic opportunities.
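
In its simplest form, this kind of analysis is just aggregation over a click log. Here is a toy sketch using pandas; a real pipeline would read from a web analytics export rather than an invented table:

    import pandas as pd

    # Toy clickstream log: one row per page view. Columns are illustrative.
    log = pd.DataFrame({
        "user":  ["a", "a", "b", "b", "b", "c"],
        "page":  ["/home", "/boots", "/home", "/boots", "/checkout", "/boots"],
        "month": ["2018-11", "2018-11", "2018-11", "2018-12", "2018-12", "2018-12"],
    })

    # Monthly views per page: a crude seasonal demand signal that could
    # feed inventory and resourcing plans.
    views = log.groupby(["month", "page"]).size().unstack(fill_value=0)
    print(views)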

Potential drawbacks and challenges

Despite the huge opportunities presented by implementing Big Data powered solutions, there can be intimidating barriers to entry when it comes to putting in place Big Data collection and analytics solutions. This can emerge across a range of areas including the complexities around data collection and the difficulties of putting in place the technologies and infrastructure needed to turn that data into useful insights.

Getting complete buy-in

One impediment to adopting a holistic Big Data approach centres on having unified support at all levels of the company. Management commitment and support are crucial, and large-scale initiatives of this type usually occur from the top down. However, Big Data analytics initiatives usually originate at mid-level, from the people who actually collect and use data day to day, which means the case for implementation must often be sold upwards. Selling the importance of Big Data to management that doesn’t understand why that type of expense is necessary can be extremely challenging.

Sourcing clean data

One of the other main challenges is undoubtedly sourcing appropriate and consistent data. There’s no use getting high-quality data if it doesn’t directly apply to your particular market sector. Nor is there much benefit in obtaining high-quality data if you can’t source it with consistent regularity, which is what allows a long-term profile of the company’s operations and market to be built. These challenges are often related to technical issues such as integration with previously siloed data or data security concerns.

Richard Sharpe, CEO of Competitive Insights, a supply chain analytics company, believes that the data quality problem is a complex issue that can have many different causes. However, he believes that these challenges can be overcome by management having a clear understanding of what they’re trying to achieve. “You have to show that what you’re ultimately trying to do with supply chain data analytics is to make the enterprise more successful and profitable.” This then leads to support being provided by company leadership who, in tandem with operations managers, can develop the processes required to govern quality data collection. This includes proper consultation with subject matter experts who can help ensure that all data is properly validated.

Managing data volumes

New technologies make it possible for supply chain organisations to collect huge volumes of information from an ever-expanding number of sources. These data points can quickly run into the billions, making it challenging to analyse them with any accuracy or to translate them directly into innovation and improvement.

This means that despite many organisations embracing Big Data strategies, many do not actually derive sustainable value from the data they’re accumulating because they begin to drown in the sheer volume of data or don’t have the appropriate software and management tools to make use of it. A common phrase used to summarise this effect is “paralysis by analysis”. Without a thorough understanding of the technologies and systems needed to process and store the data collected, this can be an easy condition for an organisation to become afflicted by.

Building the infrastructure

Companies need to invest in the right technologies to have a true 360-degree view of their business. And in many cases, these technologies can involve large initial capital outlays. Getting the infrastructure in place is key to being able to collect, process, and analyse data that enables you to track inventory, assets and materials in your supply chain.

Putting in place the infrastructure may also require additional training expenses, so that staff are properly trained in how to use new software platforms or to maintain sensors and other new IoT devices. In some cases, this will extend to requiring hiring new talent capable of using and interpreting new analytical tools.

Conclusion

Big Data offers huge opportunities to supply chain organisations, as vital information contained within multiple data sources can now be consolidated and analysed. These new perspectives can reveal the insights necessary to understand and solve problems that were previously considered too complex. New insights can also encourage organisations to scale intelligent systems across all activities in the supply chain, embedding intelligence in every part of the business.

There is also no doubt that implementing comprehensive Big Data solutions can involve new and significant challenges. However, once the new infrastructure and processes are in place, the nature of modern Cloud-based networks allows for data to be accessed easily from anywhere at any time. It also allows for other benefits beyond cost reduction and production gains to be realised over time, such as ongoing rather than just one-off efficiency gains and improved transparency and compliance tracking across the entire organisation.

Bastian Managing Director, Tony Richter, is a supply chain industry expert with 7+ years executing senior supply chain search across APAC. He works exclusively with a small portfolio of clients and prides himself on the creation of a transparent, credible, and focused approach. This ensures long-term trust can be established with all clients and candidates.

Source: Bastian Consulting

What is AI?


There is a mountain of hype around big data, artificial intelligence (AI), and machine learning. It’s a bit like kissing in the schoolyard – everyone is talking about it, but few are really doing it, and nobody is doing it well (shoutout to my friend Steve Totman at Cloudera for that line). There is certainly broad consensus that organizations need to be monetizing their data. But with all the noise around these new technologies, I think many business leaders are left scratching their heads about what it all means.

Given the huge diversity of applications and opinions on this topic, it may be folly, but I’d like to attempt to provide a practical, useful definition of artificial intelligence. While my definition probably won’t win any accolades for theoretical accuracy, I believe that it will provide a useful framework for talking about the specific actions that an organization needs to take in order to make the most of their data.

The theoretical definition

If you ask a computer scientist (or Will Smith), AI is what you get when you create a computer that is capable of thinking for itself. It’s HAL from 2001: A Space Odyssey or Lt. Commander Data from Star Trek: The Next Generation (two of the greatest masterpieces of all time). These computers are self-aware: thinking, independent machines that are (unfortunately) very likely to take over the world.

While that definition may be strictly accurate from the ivory tower, it’s not particularly practical. No scientist has created such a thing, and no business is really considering utilizing such an entity in its business model.

Laying aside that definition, then, let’s look to something much more practical that can actually move the conversation forward in business.

AI is not machine learning

There are two main concepts, according to my definitions, that are important. AI is one, and I shall define it shortly. Machine learning is the second. There’s just as much confusion about the definition of machine learning as there is about AI, and I think it’s important to point out that they’re not the same.

Machine learning is known by other names. Harvard Business Review called it data science, and dubbed it the sexiest job of the 21st century – which is a pretty bold claim, given that there are a lot of years left until the 22nd century. Years ago, it was called “statistics” or “predictive modeling.”

Whatever you call it, machine learning is a method of using historical data to make predictions about the future. The machine learns from those historical examples to build a model that can then be used to make predictions about new data.

For example, credit card companies need to detect fraudulent transactions in real time so that they can block them. Losing money to fraud is a big problem for card providers, and detecting fraud is an ideal machine learning problem. Credit card providers have a mountain of historical transactions, some of which were flagged as fraudulent. Using machine learning, the historical transactions can be used to train a model. That model is basically a machine that looks at a transaction and judges how likely it is to be fraud.
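
A minimal sketch of that training step, assuming a toy feature set (the features, values, and model choice are illustrative, not how any real card provider scores transactions):

    from sklearn.ensemble import RandomForestClassifier

    # Each historical transaction: [amount, hour_of_day, merchant_risk_score],
    # labelled 1 if it was later confirmed as fraud. All values are made up.
    X_history = [
        [12.50, 14, 0.1], [980.00, 3, 0.9], [45.00, 11, 0.2],
        [1500.00, 2, 0.8], [23.99, 19, 0.1], [760.00, 4, 0.7],
    ]
    y_history = [0, 1, 0, 1, 0, 1]

    model = RandomForestClassifier(random_state=0).fit(X_history, y_history)

    # The trained model then scores each new transaction as it happens.
    new_txn = [[890.00, 3, 0.85]]
    print(model.predict_proba(new_txn)[0][1])  # estimated probability of fraud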

Another common example in the healthcare space is predicting patient outcomes. Suppose a patient goes to the ER and ends up getting an infection while they’re in the hospital. That’s a bad outcome for the patient (obviously), but also for the hospital and the insurance companies and so on. It’s in everyone’s interest to try to prevent these kinds of incidents.

Healthcare providers frequently use past patient data (including information on patients that both did and did not have a bad outcome) in order to build models that can predict whether or not a particular patient is likely to have a bad outcome in the future.

Machine learning models are very narrowly defined. They predict an event or a number. Is the patient going to get sicker? How much pipeline will my sales team generate next quarter? Will this potential customer respond to my marketing message? The models are designed to answer a very specific question by making a very specific prediction, and in turn become important inputs into AI solutions.

Artificial intelligence combines data, business logic, and predictions

Having a machine learning model is like having a superpower or a crystal ball. I can feed it data and it will make predictions about the future. These models can identify potentially bad loans before they default. They can forecast revenue out into the future. They can highlight places where crimes are likely to occur. The AI system is how you put them to practical use.

Let’s go back to the credit card fraud example. Suppose I could tell you by means of a machine learning model whether or not a transaction was likely to be fraudulent. What would you do? Even thinking about it for a minute makes it obvious that there’s a lot more work to do before you can start getting value out of that model.

Here are some questions that you need to consider in this example:

  1. What data is available to me at the time of the transaction?
  2. How much time do I have in order to process the data and reject the transaction?
  3. What regulations restrict my ability to block potentially fraudulent transactions?
  4. Nobody likes having legitimate transactions blocked. What customer experience concerns do I need to address?
  5. What false positive rates and false negative rates am I comfortable with?
  6. …and so on

There are many more questions that a credit card provider would need to consider before implementing a system to block potentially fraudulent transactions.

That system, though, is what I call AI. It’s the combination of all the business logic, all the data, and all the predictions that I need in order to automate a decision or a process.

  • Business Logic: Business logic is probably the most important aspect of implementing an AI system. It encompasses the user experience, the legal compliance issues, the various thresholds and flags that I may need, and so on. It’s basically the glue that holds the whole process together.
  • Data: AI systems reach out for data. They might need to aggregate customer data, summarize transactions, collect a measurement from a sensor, and so on. Regardless of where it comes from, data drives the AI system; without it, the system comes screeching to a halt.
  • Predictions: Not every AI system uses predictions, but all of the good ones do. Anyone who has ever called their cable provider has dealt with the endless automated phone system. They’re trying to automate a process, but they’re not being smart about it. It’s dumb AI. Smart AI might make predictions about why I was calling and attempt to route me to the right place, for instance. Predictions are the technology that makes AI truly smart. (A minimal sketch of how the three pieces fit together follows below.)
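
Putting the three pieces together, here is a minimal sketch; the thresholds, field names, and rules are all hypothetical rather than drawn from any real system:

    # Hypothetical fraud decision: the model's prediction is one input,
    # wrapped in business logic and customer data.
    BLOCK_THRESHOLD = 0.90    # tolerance for false positives, set by the business
    REVIEW_THRESHOLD = 0.60

    def decide(fraud_probability, customer):
        # Business logic: a long-standing customer is never auto-blocked.
        if fraud_probability >= BLOCK_THRESHOLD and not customer["long_standing"]:
            return "block"
        # Softer action where the model is less sure or the relationship matters.
        if fraud_probability >= REVIEW_THRESHOLD:
            return "hold_for_review"
        return "approve"

    print(decide(0.93, {"long_standing": False}))  # -> block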

Source: Datarobot

The Business of Artificial Intelligence


For more than 250 years the fundamental drivers of economic growth have been technological innovations. The most important of these are what economists call general-purpose technologies — a category that includes the steam engine, electricity, and the internal combustion engine. Each one catalysed waves of complementary innovations and opportunities. The internal combustion engine, for example, gave rise to cars, trucks, airplanes, chain saws, and lawnmowers, along with big-box retailers, shopping centres, cross-docking warehouses, new supply chains, and, when you think about it, suburbs. Companies as diverse as Walmart, UPS, and Uber found ways to leverage the technology to create profitable new business models.

The most important general-purpose technology of our era is artificial intelligence, particularly machine learning (ML) — that is, the machine’s ability to keep improving its performance without humans having to explain exactly how to accomplish all the tasks it’s given. Within just the past few years machine learning has become far more effective and widely available. We can now build systems that learn how to perform tasks on their own.

Why is this such a big deal? Two reasons. First, we humans know more than we can tell: We can’t explain exactly how we’re able to do a lot of things — from recognizing a face to making a smart move in the ancient Asian strategy game of Go. Prior to ML, this inability to articulate our own knowledge meant that we couldn’t automate many tasks. Now we can.

Second, ML systems are often excellent learners. They can achieve superhuman performance in a wide range of activities, including detecting fraud and diagnosing disease. Excellent digital learners are being deployed across the economy, and their impact will be profound.

In the sphere of business, AI is poised to have a transformational impact, on the scale of earlier general-purpose technologies. Although it is already in use in thousands of companies around the world, most big opportunities have not yet been tapped. The effects of AI will be magnified in the coming decade, as manufacturing, retailing, transportation, finance, health care, law, advertising, insurance, entertainment, education, and virtually every other industry transform their core processes and business models to take advantage of machine learning. The bottleneck now is in management, implementation, and business imagination.

Like so many other new technologies, however, AI has generated lots of unrealistic expectations. We see business plans liberally sprinkled with references to machine learning, neural nets, and other forms of the technology, with little connection to its real capabilities. Simply calling a dating site “AI-powered,” for example, doesn’t make it any more effective, but it might help with fundraising. This article will cut through the noise to describe the real potential of AI, its practical implications, and the barriers to its adoption.

 

WHAT CAN AI DO TODAY?

The term artificial intelligence was coined in 1955 by John McCarthy, a math professor at Dartmouth who organized the seminal conference on the topic the following year. Ever since, perhaps in part because of its evocative name, the field has given rise to more than its share of fantastic claims and promises. In 1957 the economist Herbert Simon predicted that computers would beat humans at chess within 10 years. (It took 40.) In 1967 the cognitive scientist Marvin Minsky said, “Within a generation the problem of creating ‘artificial intelligence’ will be substantially solved.” Simon and Minsky were both intellectual giants, but they erred badly. Thus it’s understandable that dramatic claims about future breakthroughs meet with a certain amount of scepticism.

Let’s start by exploring what AI is already doing and how quickly it is improving. The biggest advances have been in two broad areas: perception and cognition. In the former category some of the most practical advances have been made in relation to speech. Voice recognition is still far from perfect, but millions of people are now using it — think Siri, Alexa, and Google Assistant. The text you are now reading was originally dictated to a computer and transcribed with sufficient accuracy to make it faster than typing. A study by the Stanford computer scientist James Landay and colleagues found that speech recognition is now about three times as fast, on average, as typing on a cell phone. The error rate, once 8.5%, has dropped to 4.9%. What’s striking is that this substantial improvement has come not over the past 10 years but just since the summer of 2016.

Image recognition, too, has improved dramatically. You may have noticed that Facebook and other apps now recognize many of your friends’ faces in posted photos and prompt you to tag them with their names. An app running on your smartphone will recognize virtually any bird in the wild. Image recognition is even replacing ID cards at corporate headquarters. Vision systems, such as those used in self-driving cars, formerly made a mistake when identifying a pedestrian as often as once in 30 frames (the cameras in these systems record about 30 frames a second); now they err less often than once in 30 million frames. The error rate for recognizing images from a large database called ImageNet, with several million photographs of common, obscure, or downright weird images, fell from higher than 30% in 2010 to about 4% in 2016 for the best systems. (See the exhibit “Puppy or Muffin?”)

The speed of improvement has accelerated rapidly in recent years as a new approach, based on very large or “deep” neural nets, was adopted. The ML approach for vision systems is still far from flawless, but then, even people have trouble quickly recognizing puppies’ faces and, more embarrassingly, sometimes see cute faces where none exist.

The second type of major improvement has been in cognition and problem solving. Machines have already beaten the finest (human) players of poker and Go — achievements that experts had predicted would take at least another decade. Google’s DeepMind team has used ML systems to improve the cooling efficiency at data centres by more than 15%, even after they were optimized by human experts. Intelligent agents are being used by the cybersecurity company Deep Instinct to detect malware, and by PayPal to prevent money laundering. A system using IBM technology automates the claims process at an insurance company in Singapore, and a system from Lumidatum, a data science platform firm, offers timely advice to improve customer support. Dozens of companies are using ML to decide which trades to execute on Wall Street, and more and more credit decisions are made with its help. Amazon employs ML to optimize inventory and improve product recommendations to customers. Infinite Analytics developed one ML system to predict whether a user would click on a particular ad, improving online ad placement for a global consumer packaged goods company, and another to improve customers’ search and discovery process at a Brazilian online retailer. The first system increased advertising ROI threefold, and the second resulted in a $125 million increase in annual revenue.

UNDERSTANDING MACHINE LEARNING

The most important thing to understand about ML is that it represents a fundamentally different approach to creating software: The machine learns from examples, rather than being explicitly programmed for a particular outcome. This is an important break from previous practice. For most of the past 50 years, advances in information technology and its applications have focused on codifying existing knowledge and procedures and embedding them in machines. Indeed, the term “coding” denotes the painstaking process of transferring knowledge from developers’ heads into a form that machines can understand and execute. This approach has a fundamental weakness: Much of the knowledge we all have is tacit, meaning that we can’t fully explain it. It’s nearly impossible for us to write down instructions that would enable another person to learn how to ride a bike or to recognize a friend’s face.

In other words, we all know more than we can tell. This fact is so important that it has a name: Polanyi’s Paradox, for the philosopher and polymath Michael Polanyi, who described it in 1964. Polanyi’s Paradox not only limits what we can tell one another but has historically placed a fundamental restriction on our ability to endow machines with intelligence. For a long time that limited the activities that machines could productively perform in the economy.

Machine learning is overcoming those limits. In this second wave of the second machine age, machines built by humans are learning from examples and using structured feedback to solve, on their own, problems such as Polanyi’s classic example of recognizing a face.

DIFFERENT FLAVORS OF MACHINE LEARNING

Artificial intelligence and machine learning come in many flavours, but most of the successes in recent years have been in one category: supervised learning systems, in which the machine is given lots of examples of the correct answer to a particular problem. This process almost always involves mapping from a set of inputs, X, to a set of outputs, Y. For instance, the inputs might be pictures of various animals, and the correct outputs might be labels for those animals: dog, cat, horse. The inputs could also be waveforms from a sound recording and the outputs could be words: “yes,” “no,” “hello,” “good-bye.” (See the exhibit “Supervised Learning Systems.”)

Successful systems often use a training set of data with thousands or even millions of examples, each of which has been labelled with the correct answer. The system can then be let loose to look at new examples. If the training has gone well, the system will predict answers with a high rate of accuracy.
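
As a minimal sketch of that setup, with invented measurements standing in for real inputs, a supervised learner maps inputs X to labelled outputs Y and then predicts labels for new examples:

    from sklearn.linear_model import LogisticRegression

    # Inputs X (toy [weight_kg, height_cm] measurements) mapped to outputs Y
    # (animal labels): the supervised X -> Y setup described above.
    X = [[30, 55], [4, 25], [500, 160], [25, 50], [5, 24], [450, 155]]
    y = ["dog", "cat", "horse", "dog", "cat", "horse"]

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print(clf.predict([[28, 52]]))  # a new, unlabelled example -> likely "dog"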

The algorithms that have driven much of this success depend on an approach called deep learning, which uses neural networks. Deep learning algorithms have a significant advantage over earlier generations of ML algorithms: They can make better use of much larger data sets. The old systems would improve as the number of examples in the training data grew, but only up to a point, after which additional data didn’t lead to better predictions. According to Andrew Ng, one of the giants of the field, deep neural nets don’t seem to level off in this way: More data leads to better and better predictions. Some very large systems are trained by using 36 million examples or more. Of course, working with extremely large data sets requires more and more processing power, which is one reason the very big systems are often run on supercomputers or specialized computer architectures.
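
For contrast with the shallow example above, a small “deep” network in PyTorch looks like this; the layer sizes and random stand-in data are purely illustrative:

    import torch
    import torch.nn as nn

    # A minimal deep net: stacked layers are what let these models keep
    # improving as the training set grows.
    model = nn.Sequential(
        nn.Linear(100, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 10),               # scores for 10 output classes
    )
    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    X = torch.randn(512, 100)            # stand-in training inputs
    y = torch.randint(0, 10, (512,))     # stand-in labels
    for _ in range(100):                 # a few gradient steps
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()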

Any situation in which you have a lot of data on behaviour and are trying to predict an outcome is a potential application for supervised learning systems. Jeff Wilke, who leads Amazon’s consumer business, says that supervised learning systems have largely replaced the memory-based filtering algorithms that were used to make personalized recommendations to customers. In other cases, classic algorithms for setting inventory levels and optimizing supply chains have been replaced by more efficient and robust systems based on machine learning. JPMorgan Chase introduced a system for reviewing commercial loan contracts; work that used to take loan officers 360,000 hours can now be done in a few seconds. And supervised learning systems are now being used to diagnose skin cancer. These are just a few examples.

It’s comparatively straightforward to label a body of data and use it to train a supervised learner; that’s why supervised ML systems are more common than unsupervised ones, at least for now. Unsupervised learning systems seek to learn on their own. We humans are excellent unsupervised learners: We pick up most of our knowledge of the world (such as how to recognize a tree) with little or no labelled data. But it is exceedingly difficult to develop a successful machine learning system that works this way.

If and when we learn to build robust unsupervised learners, exciting possibilities will open up. These machines could look at complex problems in fresh ways to help us discover patterns — in the spread of diseases, in price moves across securities in a market, in customers’ purchase behaviours, and so on — that we are currently unaware of. Such possibilities lead Yann LeCun, the head of AI research at Facebook and a professor at NYU, to compare supervised learning systems to the frosting on the cake and unsupervised learning to the cake itself.

 

Another small but growing area within the field is reinforcement learning. This approach is embedded in systems that have mastered Atari video games and board games like Go. It is also helping to optimize data centre power usage and to develop trading strategies for the stock market. Robots created by Kindred use machine learning to identify and sort objects they’ve never encountered before, speeding up the “pick and place” process in distribution centres for consumer goods. In reinforcement learning systems the programmer specifies the current state of the system and the goal, lists allowable actions, and describes the elements of the environment that constrain the outcomes for each of those actions. Using the allowable actions, the system has to figure out how to get as close to the goal as possible. These systems work well when humans can specify the goal but not necessarily how to get there. For instance, Microsoft used reinforcement learning to select headlines for MSN.com news stories by “rewarding” the system with a higher score when more visitors clicked on the link. The system tried to maximize its score on the basis of the rules its designers gave it. Of course, this means that a reinforcement learning system will optimize for the goal you explicitly reward, not necessarily the goal you really care about (such as lifetime customer value), so specifying the goal correctly and clearly is critical.
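
As a toy version of the headline example, here is an epsilon-greedy bandit, one of the simplest reinforcement learning strategies: mostly show the headline with the best observed click rate, but explore alternatives some fraction of the time. The headlines and click probabilities are invented, and this is a sketch of the general idea, not Microsoft's actual system.

import random

headlines = ["Headline A", "Headline B", "Headline C"]
true_click_rate = {"Headline A": 0.02, "Headline B": 0.05, "Headline C": 0.03}
shows = {h: 0 for h in headlines}
clicks = {h: 0 for h in headlines}

def observed_rate(h):
    return clicks[h] / shows[h] if shows[h] else 0.0

for _ in range(100_000):
    if random.random() < 0.1:              # explore 10% of the time
        h = random.choice(headlines)
    else:                                  # otherwise exploit the best so far
        h = max(headlines, key=observed_rate)
    shows[h] += 1
    if random.random() < true_click_rate[h]:
        clicks[h] += 1                     # the "reward": a simulated click

print(max(headlines, key=observed_rate))   # almost always "Headline B"

Note that the loop maximizes exactly the reward it is given – simulated clicks – which is the caveat above in miniature.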

PUTTING MACHINE LEARNING TO WORK

There are three pieces of good news for organizations looking to put ML to use today. First, AI skills are spreading quickly. The world still does not have nearly enough data scientists and machine learning experts, but the demand for them is being met by online educational resources as well as by universities. The best of these, including Udacity, Coursera, and fast.ai, do much more than teach introductory concepts; they can actually get smart, motivated students to the point of being able to create industrial-grade ML deployments. In addition to training their own people, interested companies can use online talent platforms such as Upwork, Topcoder, and Kaggle to find ML experts with verifiable expertise.

The second welcome development is that the necessary algorithms and hardware for modern AI can be bought or rented as needed. Google, Amazon, Microsoft, Salesforce, and other companies are making powerful ML infrastructure available via the cloud. The cutthroat competition among these rivals means that companies that want to experiment with or deploy ML will see more and more capabilities available at ever-lower prices over time.

The final piece of good news, and probably the most underappreciated, is that you may not need all that much data to start making productive use of ML. The performance of most machine learning systems improves as they’re given more data to work with, so it seems logical to conclude that the company with the most data will win. That might be the case if “win” means “dominate the global market for a single application such as ad targeting or speech recognition.” But if success is defined instead as significantly improving performance, then sufficient data is often surprisingly easy to obtain.

For example, Udacity cofounder Sebastian Thrun noticed that some of his salespeople were much more effective than others when replying to inbound queries in a chat room. Thrun and his graduate student Zayd Enam realized that their chat room logs were essentially a set of labelled training data — exactly what a supervised learning system needs. Interactions that led to a sale were labelled successes, and all others were labelled failures. Zayd used the data to predict what answers successful salespeople were likely to give in response to certain very common inquiries and then shared those predictions with the other salespeople to nudge them toward better performance. After 1,000 training cycles, the salespeople had increased their effectiveness by 54% and were able to serve twice as many customers at a time.
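
A sketch of how such chat logs could become training data: replies labelled by whether the interaction led to a sale are fed to a text classifier, which can then score new candidate replies. The replies and labels below are invented, and the pipeline (TF-IDF features plus logistic regression) is just one plausible choice, not a description of Udacity's actual system.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Salesperson replies from chat logs, labelled 1 if the chat led to a sale.
replies = [
    "Happy to walk you through the syllabus on a quick call today",
    "Check the website for details",
    "Let me set up a call to answer your questions directly",
    "Not sure, try emailing support",
]
led_to_sale = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(replies, led_to_sale)

# Score a new candidate reply; higher means "more like past successes".
print(model.predict_proba(["I can walk you through it on a call"])[0][1])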

The AI startup WorkFusion takes a similar approach. It works with companies to bring higher levels of automation to back-office processes such as paying international invoices and settling large trades between financial institutions. The reason these processes haven’t been automated yet is that they’re complicated; relevant information isn’t always presented the same way every time (“How do we know what currency they’re talking about?”), and some interpretation and judgment are necessary. WorkFusion’s software watches in the background as people do their work and uses their actions as training data for the cognitive task of classification (“This invoice is in dollars. This one is in yen. This one is in euros…”). Once the system is confident enough in its classifications, it takes over the process.

Machine learning is driving changes at three levels: tasks and occupations, business processes, and business models. An example of task-and-occupation redesign is the use of machine vision systems to identify potential cancer cells — freeing up radiologists to focus on truly critical cases, to communicate with patients, and to coordinate with other physicians. An example of process redesign is the reinvention of the workflow and layout of Amazon fulfilment centres after the introduction of robots and optimization algorithms based on machine learning. Similarly, business models need to be rethought to take advantage of ML systems that can intelligently recommend music or movies in a personalized way. Instead of selling songs à la carte on the basis of consumer choices, a better model might offer a subscription to a personalized station that predicted and played music a particular customer would like, even if the person had never heard it before.

Note that machine learning systems hardly ever replace the entire job, process, or business model. Most often they complement human activities, which can make their work ever more valuable. The most effective rule for the new division of labour is rarely, if ever, “give all tasks to the machine.” Instead, if the successful completion of a process requires 10 steps, one or two of them may become automated while the rest become more valuable for humans to do. For instance, the chat room sales support system at Udacity didn’t try to build a bot that could take over all the conversations; rather, it advised human salespeople about how to improve their performance. The humans remained in charge but became vastly more effective and efficient. This approach is usually much more feasible than trying to design machines that can do everything humans can do. It often leads to better, more satisfying work for the people involved and ultimately to a better outcome for customers.

Designing and implementing new combinations of technologies, human skills, and capital assets to meet customers’ needs requires large-scale creativity and planning. It is a task that machines are not very good at. That makes being an entrepreneur or a business manager one of society’s most rewarding jobs in the age of ML.

RISKS AND LIMITS

The second wave of the second machine age brings with it new risks. In particular, machine learning systems often have low “interpretability,” meaning that humans have difficulty figuring out how the systems reached their decisions. Deep neural networks may have hundreds of millions of connections, each of which contributes a small amount to the ultimate decision. As a result, these systems’ predictions tend to resist simple, clear explanation. Unlike humans, machines are not (yet!) good storytellers. They can’t always give a rationale for why a particular applicant was accepted or rejected for a job, or a particular medicine was recommended. Ironically, even as we have begun to overcome Polanyi’s Paradox, we’re facing a kind of reverse version: Machines know more than they can tell us.

This creates three risks. First, the machines may have hidden biases, derived not from any intent of the designer but from the data provided to train the system. For instance, if a system learns which job applicants to accept for an interview by using a data set of decisions made by human recruiters in the past, it may inadvertently learn to perpetuate their racial, gender, ethnic, or other biases. Moreover, these biases may not appear as an explicit rule but, rather, be embedded in subtle interactions among the thousands of factors considered.

A second risk is that, unlike traditional systems built on explicit logic rules, neural network systems deal with statistical truths rather than literal truths. That can make it difficult, if not impossible, to prove with complete certainty that the system will work in all cases — especially in situations that weren’t represented in the training data. Lack of verifiability can be a concern in mission-critical applications, such as controlling a nuclear power plant, or when life-or-death decisions are involved.

Third, when the ML system does make errors, as it almost inevitably will, diagnosing and correcting exactly what’s going wrong can be difficult. The underlying structure that led to the solution can be unimaginably complex, and the solution may be far from optimal if the conditions under which the system was trained change.

While all these risks are very real, the appropriate benchmark is not perfection but the best available alternative. After all, we humans, too, have biases, make mistakes, and have trouble explaining truthfully how we arrived at a particular decision. The advantage of machine-based systems is that they can be improved over time and will give consistent answers when presented with the same data.

Does that mean there is no limit to what artificial intelligence and machine learning can do? Perception and cognition cover a great deal of territory — from driving a car to forecasting sales to deciding whom to hire or promote. We believe the chances are excellent that AI will soon reach superhuman levels of performance in most or all of these areas. So what won’t AI and ML be able to do?

We sometimes hear “Artificial intelligence will never be good at assessing emotional, crafty, sly, inconsistent human beings — it’s too rigid and impersonal for that.” We don’t agree. ML systems like those at Affectiva are already at or beyond human-level performance in discerning a person’s emotional state on the basis of tone of voice or facial expression. Other systems can infer when even the world’s best poker players are bluffing well enough to beat them at the amazingly complex game Heads-up No-Limit Texas Hold’em. Reading people accurately is subtle work, but it’s not magic. It requires perception and cognition — exactly the areas in which ML is currently strong and getting stronger all the time.

A great place to start a discussion of the limits of AI is with Pablo Picasso’s observation about computers: “But they are useless. They can only give you answers.” They’re actually far from useless, as ML’s recent triumphs show, but Picasso’s observation still provides insight. Computers are devices for answering questions, not for posing them. That means entrepreneurs, innovators, scientists, creators, and other kinds of people who figure out what problem or opportunity to tackle next, or what new territory to explore, will continue to be essential.



Similarly, there’s a huge difference between passively assessing someone’s mental state or morale and actively working to change it. ML systems are getting quite good at the former but remain well behind us at the latter. We humans are a deeply social species; other humans, not machines, are best at tapping into social drives such as compassion, pride, solidarity, and shame in order to persuade, motivate, and inspire. In 2014 the TED Conference and the XPrize Foundation announced an award for “the first artificial intelligence to come to this stage and give a TED Talk compelling enough to win a standing ovation from the audience.” We doubt the award will be claimed anytime soon.

We think the biggest and most important opportunities for human smarts in this new age of super powerful ML lie at the intersection of two areas: figuring out what problems to work on next, and persuading a lot of people to tackle them and go along with the solutions. This is a decent definition of leadership, which is becoming much more important in the second machine age.

The status quo of dividing up work between minds and machines is falling apart very quickly. Companies that stick with it are going to find themselves at an ever-greater competitive disadvantage compared with rivals who are willing and able to put ML to use in all the places where it is appropriate and who can figure out how to effectively integrate its capabilities with humanity’s.

A time of tectonic change in the business world has begun, brought on by technological progress. As was the case with steam power and electricity, it’s not access to the new technologies themselves, or even to the best technologists, that separates winners from losers. Instead, it’s innovators who are open-minded enough to see past the status quo and envision very different approaches, and savvy enough to put them into place. One of machine learning’s greatest legacies may well be the creation of a new generation of business leaders.

In our view, artificial intelligence, especially machine learning, is the most important general-purpose technology of our era. The impact of these innovations on business and the economy will be reflected not only in their direct contributions but also in their ability to enable and inspire complementary innovations. New products and processes are being made possible by better vision systems, speech recognition, intelligent problem solving, and many other capabilities that machine learning delivers.

Some experts have gone even further. Gill Pratt, who now heads the Toyota Research Institute, has compared the current wave of AI technology to the Cambrian explosion 500 million years ago that birthed a tremendous variety of new life forms. Then as now, one of the key new capabilities was vision. When animals first gained this capability, it allowed them to explore the environment far more effectively; that catalyzed an enormous increase in the number of species, both predators and prey, and in the range of ecological niches that were filled. Today as well we expect to see a variety of new products, services, processes, and organizational forms and also numerous extinctions. There will certainly be some weird failures along with unexpected successes.

Although it is hard to predict exactly which companies will dominate in the new environment, a general principle is clear: The most nimble and adaptable companies and executives will thrive. Organizations that can rapidly sense and respond to opportunities will seize the advantage in the AI-enabled landscape. So the successful strategy is to be willing to experiment and learn quickly. If managers aren’t ramping up experiments in the area of machine learning, they aren’t doing their job. Over the next decade, AI won’t replace managers, but managers who use AI will replace those who don’t.

AI in 8 minutes


Knowing a little about everything is often better than having one expert skill. This is particularly true for people entering the debate in emerging markets. Most notably, tech.

 

Most folks think they know a little about AI. But the field is so new and growing so fast that the current experts are breaking new ground daily. There is so much science to uncover that technologists and policymakers from other areas can contribute rapidly in the field of AI.


 

That’s where this article comes in. My aim was to create a short reference which will bring technically minded people up to speed quickly with AI terms, language and techniques. Hopefully, this text can be understood by most non-practitioners whilst serving as a reference to everybody.

 

Introduction

Artificial intelligence (AI), deep learning, and neural networks are terms used to describe powerful machine learning-based techniques which can solve many real-world problems.

 

While deductive reasoning, inference, and decision-making comparable to the human brain’s are a little way off, there have been many recent advances in AI techniques and associated algorithms, particularly with the increasing availability of large data sets from which AI can learn.

 

The field of AI draws on many fields including mathematics, statistics, probability theory, physics, signal processing, machine learning, computer science, psychology, linguistics, and neuroscience. Issues surrounding the social responsibility and ethics of AI draw parallels with many branches of philosophy.

 

The motivation for advancing AI techniques further is that the solutions required to solve problems with many variables are incredibly complicated, difficult to understand and not easy to put together manually.

 

Increasingly, corporations, researchers and individuals are relying on machine learning to solve problems without requiring comprehensive programming instructions. This black-box approach to problem-solving is becoming critical: human programmers are finding it increasingly complex and time-consuming to write the algorithms required to model and solve data-heavy problems. Even when we do construct a useful routine to process big data sets, it tends to be extremely complex, difficult to maintain and impossible to test adequately.

 

Modern machine learning and AI algorithms, along with properly considered and prepared training data, are able to do the programming for us.

 

 

Overview

Intelligence: the ability to perceive information, and retain it as knowledge to be applied towards adaptive behaviors within an environment or context.

 

This Wikipedia definition of intelligence can apply to both organic brains and machines. Intelligence does not imply consciousness, a common misconception perpetuated by science fiction writers.

 

Search for AI examples on the internet and you’ll see references to IBM’s Watson, a machine learning system made famous by winning the TV quiz show Jeopardy! in 2011. It has since been repurposed and used as a template for a diverse range of commercial applications. Apple, Amazon and Google are working hard to get a similar system in our homes and pockets.

 

Natural language processing and speech recognition were the first commercial applications of machine learning, followed closely by other automated recognition tasks (pattern, text, audio, image, video, facial, …). The range of applications is exploding and includes autonomous vehicles, medical diagnoses, gaming, search engines, spam filtering, crime fighting, marketing, robotics, remote sensing, computer vision, transportation, music recognition, classification…

 

AI has become so embedded in the technology that we use, it is now not seen by many as ‘AI’ but just an extension of computing. Ask somebody on the street if they have AI on their phone and they will probably say no. But AI algorithms are embedded everywhere from predictive text to the autofocus system in the camera. The general view is that AI has yet to arrive. But it is here now and has been for some time.

 

AI is a fairly generalised term. The focus of most research is the slightly narrower field of artificial neural networks and deep learning.

 

How your brain works

The human brain is an exquisite carbon computer estimated to perform a billion billion calculations per second (one exaflop, or 1,000 petaflops) while consuming around 20 watts of power. The Chinese supercomputer Tianhe-2 (at the time of writing the fastest in the world) manages only 33,860 trillion calculations per second (33.86 petaflops) and consumes 17.6 megawatts. We have some way to go before our silicon creations catch up to evolution’s carbon ones.
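
As a rough sanity check on that comparison, the arithmetic works out to an enormous efficiency gap. Here it is as a few lines of Python, using only the figures quoted above:

# Operations per joule for each machine, from the figures above.
brain_ops, brain_watts = 1e18, 20            # ~1,000 petaflops at ~20 W
tianhe_ops, tianhe_watts = 33.86e15, 17.6e6  # 33.86 petaflops at 17.6 MW

print(brain_ops / brain_watts)     # ~5e16 operations per joule
print(tianhe_ops / tianhe_watts)   # ~1.9e9 operations per joule
print((brain_ops / brain_watts) / (tianhe_ops / tianhe_watts))  # ~2.6e7x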

 

The precise mechanism that the brain uses to perform its thinking is up for debate and further study (I like the theory that the brain harnesses quantum effects, but that’s another article). However, the inner workings are often modelled around the concept of neurons and their networks. The brain is thought to contain around 100 billion neurons.

 

 

Neurons interact and communicate along pathways allowing messages to be passed around. The signals from individual neurons are weighted and combined before activating other neurons. This process of messages being passed around, combining and activating other neurons is repeated across layers. Across the 100 billion neurons in the human brain, the summation of this weighted combination of signals is complex. And that is a considerable understatement.

 

But it’s not that simple. Each neuron applies a function, or transformation, to its weighted inputs before testing if an activation threshold has been reached. This combination of factors can be linear or non-linear.

 

The initial input signals originate from a variety of sources… our senses, internal monitoring of bodily functions (blood oxygen level, stomach contents…). A single neuron may receive hundreds of thousands of input signals before deciding how to react.

 

Thinking or processing and the resultant instructions given to our muscles are the summations of input signals and feedback loops across many layers and cycles of the neural network. But the brain’s neural networks also change and update, including modifications to the amount of weighting applied between neurons. This is caused by learning and experience.

 

This model of the human brain has been used as a template to help replicate the brain’s capabilities inside a computer simulation… an artificial neural network.

 

Artificial Neural Networks (ANNs)

Artificial Neural Networks are mathematical models inspired by and modelled on biological neural networks. ANNs are able to model and process non-linear relationships between inputs and outputs. Adaptive weights between the artificial neurons are tuned by a learning algorithm that reads observed data with the goal of improving the output.

 

 

Optimization techniques are used to bring the ANN’s solution as close as possible to the optimal one. If the optimization is successful, the ANN is able to solve the particular problem with high performance.

 

An ANN is modelled using layers of neurons. The structure of these layers is known as the model’s architecture. Neurons are individual computational units able to receive inputs and apply a mathematical function to determine if messages are passed along.

 

In a simple three-layer model, the first layer is the input layer, followed by one hidden layer and an output layer. Each layer can contain one or more neurons.
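
Here is a bare-bones version of such a three-layer model in Python with NumPy. The weights are random and untrained – a learning algorithm would normally tune them against observed data – and the layer sizes are arbitrary.

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))      # a common activation function

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))         # weights: 3 inputs -> 4 hidden neurons
W2 = rng.normal(size=(4, 2))         # weights: 4 hidden -> 2 output neurons

x = np.array([0.5, -1.2, 0.8])       # one input example
hidden = sigmoid(x @ W1)             # each neuron: weighted sum, then activation
output = sigmoid(hidden @ W2)
print(output)                        # activations of the two output neurons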

 

As models become increasingly complex, with more layers and more neurons, their problem-solving capabilities increase. If the model is too large for the given problem, however, it may end up fitting the quirks of its training data rather than learning patterns that generalize to new data. This is known as overfitting.

 

The fundamental model architecture and its tuning are the major elements of ANN techniques, along with the learning algorithms that read in the data. All of these components affect the performance of the model.

 

Models tend to be characterized by an activation function. This is used to convert a neuron’s weighted input to its output activation. There is a selection of transformations that can be used as the activation function.
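
For illustration, here are three of the most widely used activation functions; the list is far from exhaustive, and choosing among them is a design decision.

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))   # squashes input into the range (0, 1)

def tanh(x):
    return np.tanh(x)             # squashes input into (-1, 1), zero-centred

def relu(x):
    return np.maximum(0, x)       # passes positives through, zeroes out negatives

for f in (sigmoid, tanh, relu):
    print(f.__name__, f(np.array([-2.0, 0.0, 2.0])))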

 

ANNs can be extremely powerful. However, even though the mathematics of a few neurons is simple, the network as a whole scales up to become complex. Because of this, ANNs are considered ‘black box’ algorithms. Choosing an ANN as a tool to solve a problem should be done with care, as it is generally not possible to unpick the system’s decision-making process later.

 

Deep Learning

Deep learning is a term used to describe neural networks and related algorithms that consume raw data. The data is processed through the layers of the model to calculate a target output.

 

Automatic feature learning is where deep learning techniques excel. A properly configured ANN is able to automatically identify the features in the input data that are important to achieving the desired output. Traditionally, the burden of making sense of the input data falls to the programmer building the system; in a deep learning setup, however, the model itself can identify how to interpret the data to achieve meaningful results. Once an optimised system has been trained, the computational, memory and power requirements of the model are much reduced.

 

Put simply, feature learning algorithms allow a machine to learn for a specific task using well-suited data… the algorithms learn how to learn.

 

Deep learning has been applied to a wide variety of tasks and is considered one of the most innovative AI techniques. There are well-designed algorithms suitable for supervised, unsupervised and semi-supervised learning problems.

 

Shadow learning is a term used to describe a simpler form of deep learning, where feature selection requires upfront processing of the data and more in-depth knowledge on the part of the programmer. The resultant models can be more transparent and higher-performing, at the expense of increased time at the design stage.

 

Summary

AI is a powerful field of data processing that can yield complex results more quickly than traditional algorithm development by programmers. ANNs and deep learning techniques can solve a diverse set of difficult problems. The downside is that the optimised models created are black boxes that are all but impossible for their human creators to unpick. This can lead to ethical problems, for which data transparency is important.

 

Source: Medium

Will Cloud replace traditional IT infrastructure?

As cloud infrastructure offerings gain more popularity, the debate on the raison d'etre of on-premise IT infrastructure has grown. Obviously, there are two sides of the debate. While one group foresees on-premise IT infrastructure fading into oblivion, the other group believes – challenges notwithstanding – traditional IT infrastructure will remain relevant.


 

Data corroborates the fact that cloud infrastructure has been becoming more popular with increasing adoption. The popularity can be partly attributed to the problems with traditional enterprise infrastructure such as cost and management problems. However, it does not seem realistic that all enterprise infrastructure will move to the cloud. Organizations will likely carry out due diligence and evaluate the proposition on a case-by-case basis. (To learn more about how the cloud is changing business, check out Project Management, Cloud Computing Style.)

The Hype Around the Cloud

There certainly appears to be some hype around the cloud, especially regarding its potential to replace traditional IT infrastructure. There was recently a debate on this topic sponsored by Deloitte. While one side appeared bullish on the potential replacement of traditional IT infrastructure, the other side took a more balanced view. Let us consider both views:

For Cloud Replacing Traditional IT Infrastructure

This side of the debate focused on eliminating the cost and hassles associated with enterprise architecture (EA). Maintaining the EA involved many different activities which are viewed as complex, costly and avoidable. There is an opportunity to move everything related to EA to the cloud and reduce hassles and costs significantly. (For more on infrastructure, see IT Infrastructure: How to Keep Up.)

Against Cloud Replacing Traditional IT Infrastructure

Jobs and processes in the cloud cannot be treated as standalone entities. EA will still have a role to play in managing the relationships and dependencies between mission, technologies, processes and business initiatives. Scott Rosenberger, partner at Deloitte Consulting, takes a more balanced view. According to Rosenberger, "No matter what tool you use, the core problem isn't the technology. It's in defining the relationships between all the different components of their vision, from business processes to technology. And that's where EA comes in."

According to David S. Linthicum, noted author,

Cloud computing does not replace enterprise architecture. It does not provide "infinite scalability," it does not "cost pennies a day," you can't "get there in an hour" – it won't iron my shirts either. It's exciting technology that holds the promise of providing more effective, efficient, and elastic computing platforms, but we're taking this hype to silly levels these days, and my core concern is that the cloud may not be able to meet these overblown expectations.

Problems of Traditional IT Infrastructure

Both exasperation with EA limitations and cost considerations have been behind the serious consideration of the cloud infrastructure proposition. Whether we are choosing something even worse is a different debate. EA is a practice which, if implemented well, could yield many benefits. However, it is unable to realize its potential because of certain problems:

  • EA is a separate practice and requires a practice-based management. Yet, organizations put people in charge of EA who are people-focused and not practice-focused.
  • Implementing quality EA requires a deep and broad understanding of EA and its role in the organization. For that, a broader planning and architecture is required, right from the start. However, many different ad hoc architectures are created based on situations, and that can completely jeopardize the broader EA goals.
  • The main problem with many EA architects is their approach to business problems. While the technical acumen of the architects cannot be questioned, they often lack the ability to take a broader view of the business problems and how the EA can solve them. The architects are too deep into the technical nuances, which prevents them from accepting other business perspectives.
  • Many EAs are too complex and rigid. This prevents them from accommodating changes necessitated by changes in business situations. Many head architects tend to forget that the main focus of EA is on business and not on unnecessary technical stuff. According to John Zachman, the founder of modern EA, "Architecture enables you to accommodate complexity and change. If you don't have Enterprise Architecture, your enterprise is not going to be viable in an increasingly complex and changing external environment."

Is Cloud the Solution?

The way forward is to have a balance and not drastically change your IT infrastructure strategy. You also need to seriously consider the issue of confidentiality and security of data. Probably the best approach would be to consider the feasibility of moving EA to the cloud in phases. For example, you could divide your EA into logical areas such as software applications and servers and consider their cases individually. For example, the following categories could be used:

  • Software applications, which can include productivity suites like Office, SQL Server, Exchange email, VMware ESX Server, SharePoint, finance programs (like QuickBooks Server), or an enterprise search program.
  • Service areas, which can include functions such as authentication mechanisms, monitoring, and task schedulers. For example, you can certainly consider replacing complex in-house services such as Active Directory with online services such as Windows Azure Active Directory.
  • Storage can be a tricky proposition because you store a lot of data which can be confidential. So, you need to think hard about whether or not you want to move that data out and allow a third party to take care of it. For example, if your business handles credit card data, it is extremely risky to hand over storage to another entity.

Conclusion

The way forward should be a balance between cloud and in-house architecture. Not all organizations are going to move to the cloud because of their unique considerations. It is rather simplistic to think that all IT infrastructure will just move to the cloud; it is far more complex than that. Studies show that a lot of talk about moving to the cloud is just that – talk. Companies will decide on cloud adoption depending on their data security, cost and benefits, relevance and other considerations. Three scenarios are possible: total, mixed or non-adoption of cloud.

At the same time, it cannot be denied that cloud-based infrastructure is going to be a major force very soon – so much so that major IT infrastructure providers are expecting a slowdown. Research firm 451 Group finds that cloud providers such as Amazon Web Services are going to grow at an exponential rate. But even in the face of growing cloud adoption, EA is not going to go away anytime soon.

Source: Techopedia

Aberdeen to Miami and back again without any money


Alisanne Ennis travelled over 10,000 miles without a single penny to her name in the hopes of raising money for Marie Curie and travelling as far as she could without any money.   Alisanne works for Accenture who encourage their employees to take 3 days every year and dedicate them to helping in the community, upskilling people or supporting a charity.


 

Alisanne set off on May 25th from Huntly in Aberdeenshire dressed in her Marie Curie yellow T-shirt and carrying Marie McCoo (a Marie Curie bear), “full of optimism and hope that whoever I met along the way would believe in me and donate to the cause.” Alisanne added that she was not disappointed, as she received donations from all sorts of people, including passengers on her BA flight to London, a stranger at JFK Airport, the NYPD and many more. In all, Alisanne received £4,200 in donations for Marie Curie, who provide care and support for people living with any terminal illness, and for their families.

 

What was your motivation behind the trip?

I have witnessed the pain and suffering of not only the patients but their family when someone is diagnosed with a terminal illness. I just wanted to see if I could help in some way

 

The idea of travelling as far as you could without money, where did the idea come from?  

Last year I was heading to Chicago for a big conference.   When I arrived at the airport I realised I had left my purse at home. I had to take a leap of faith and get on the plane in the knowledge that some of my colleagues would help me out when I got to my final destination.  I managed to get to Chicago without any money, but my recent trip was a totally different ball game.

 

What were most people’s reaction when you told them what you were doing?

Mad, Brave, Inspirational. Having travelled every week for the last 2 years to deliver a large global project in Switzerland, the last thing I should have been thinking about was getting on another plane…

 

What was the wildest story that came from it?  

Being picked up by the police in NYC at Grand Central Station for playing my Ukulele and singing Irish Ballads.   I managed to make $10 in 10 mins so the police threw in $2 each so I could buy myself a Shake Shack burger and chips – heaven!

What was the toughest challenge you faced? 

Not having any money in Miami – not a great place to be with no money.  People there weren’t particularly friendly or helpful. I managed to find a hotel which gave me a free breakfast and a margarita every day, so I lived on breakfast bars and nuts!

 

What inspired you to tough out the worst parts?

I’m healthy and happy and not facing the pain and suffering that people with a terminal illness have to deal with every day

 

Would you do It again?

Round the world – relay style…   Never say never!

 

Alisanne added that she was humbled by the generosity and kindness shown by the majority of people she told her story to, which led to the money she raised paying for 210 hours of care for people in the community with terminal illnesses and their families.

“I wouldn’t have achieved my goal if I didn’t get the support and encouragement from friends, family, local and other businesses around the UK. Many thanks to Hanson Regan for supporting the cause and believing in me.”

 

If you would like to support the cause you can donate to Marie Curie directly, you can find their donation page here. To get involved you can search their charity events to find something in your local area.

Countering counterfeit drugs with Blockchain



The Effects of Counterfeit Drugs

In addition to posing a health risk to patients harmed by placebos or even harmful ingredients in the fake drugs, counterfeits add up to a major loss for the pharmaceutical industry to the tune of hundreds of billions a year. Aside from concerns about harm and loss, new legal requirements that demand traceability for drugs are kicking in.

Counterfeit drugs have been identified as a persistent global problem since 1985. The World Health Organization (WHO) estimates that around 10 percent of drugs found in low to middle income countries are counterfeit. That translates into the deaths of tens of thousands of people with diseases who took medication without the necessary active ingredient to treat their conditions. (To learn more about how tech is influencing the drug industry, see Big Data's Influence in Medicine and Pharmaceuticals.)

Current Conditions Favor Counterfeiting

According to Harvey Bale, Ph.D., of the Organization for Economic Co-operation and Development (OECD), counterfeits persist because of four conditions:

  1. Fakes can be made relatively cheaply (at least as profitable as narcotics – lower risk).
  2. Many countries, especially in the developing world, lack adequate regulation and enforcement.
  3. Even in the industrialized countries, the risk of prosecution and penalties for counterfeiting are inadequate.
  4. The way in which medicines reach the consumer is also different from other goods: The end user has little knowledge of the product.

Limited Solutions Applied

As the problem is particularly rampant in West Africa, a Ghanaian entrepreneur named Bright Simons offered a verification solution through his company, mPedigree. A customer can be assured that the medication offered for sale is genuine if the code they find on the bottle checks out when they call a free number.

The mPedigree approach to spotting counterfeits works only on the final step of the drug supply chain, and it still puts authentication into one central source rather than offering the transparency of a public ledger, which is only possible with blockchain technology.

The Promise of Blockchain

IBM laid out some of the ways blockchain can improve the healthcare industry in Blockchain: The Chain of Trust and its Potential to Transform Healthcare – Our Point of View. The premise is that blockchain serves as “an Internet of Value” because what is in the blockchain record cannot be altered, and so can be relied upon as trustworthy.

Having that kind of authentication in place would assure consumers they are getting the benefits of the drugs they are prescribed and would benefit pharma companies in setting up a completely traceable supply chain.

Compliance Benefits

At the end of 2013, President Obama signed the Drug Supply Chain Security Act (DSCSA), which calls for a national track-and-trace system by which manufacturers must affix product identifiers to each package of product that is introduced into the supply chain. As companies were granted a period of ten years to get to the point of compliance with the new regulations, they have to gear up for a reliable solution to accurately track their supply chains by 2023. AI is also having a big influence on medicine.

Blockchain Features Secure Trust

Tapan Mehta, market development executive, healthcare and life sciences services practice, at DMI was quoted in Healthcare IT News, saying, “A blockchain-based system could ensure a chain-of-custody log, tracking each step of the supply chain at the individual drug or product level.”

“With blockchain, records are permanent and cannot be altered in any way, ensuring the most secure transfer of data possible,” Mehta explained, thanks to a ledger that is both decentralized and public. That’s what gives blockchain the dual distinction of “transparency and traceability.”

Working off of that would not only make it possible to distinguish the real thing from the counterfeit but, “to trace every drug product all the way back to the origin of the raw material used to make it.”

Another advantage it offers is recovery. He explained, “In the event that a drug shipment is disrupted or goes missing, the data stored on the common ledger provides a rapid way for all parties to trace it,” to the last identified handler.
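
To see why such a ledger is tamper-evident, here is a toy hash chain in Python: each record embeds the hash of the record before it, so editing any past entry breaks every later link. The custody records are invented, and a real blockchain adds distributed consensus across many parties on top of this basic mechanism.

import hashlib, json

def block_hash(record, prev_hash):
    payload = json.dumps({"record": record, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def add_block(chain, record):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "prev_hash": prev_hash,
                  "hash": block_hash(record, prev_hash)})

def verify(chain):
    for i, block in enumerate(chain):
        prev_hash = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev_hash"] != prev_hash or \
           block["hash"] != block_hash(block["record"], prev_hash):
            return False
    return True

chain = []
add_block(chain, "manufacturer -> wholesaler, lot 42")
add_block(chain, "wholesaler -> pharmacy, lot 42")
print(verify(chain))                 # True: the custody log checks out
chain[0]["record"] = "tampered"      # a counterfeiter edits history...
print(verify(chain))                 # False: every later link now fails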

Building the Pharma Blockchain

The blockchain solution is not just a hypothetical idea. In 2017, Chronicled set up a joint venture to build and test a prototype system to function as an industry model under the name of the MediLedger Project. The project included representatives from major companies like Genentech, the Roche Group, Pfizer, AmerisourceBergen, and McKesson Corporation.

The MediLedger Project was built on a Parity Ethereum client, which worked to achieve the track-and-trace aims of the DSCSA, according to the report on the project’s progress for the year. The prototype demonstrated the possibility of a secure blockchain network capable of processing over 2,000 transactions per second.

The project showed that a blockchain system can validate “the authenticity of product identifiers (verification) as well as the provenance of sellable units back to the originating manufacturer.” In addition to countering counterfeits, that record at every step can be useful in “allowing for expedited suspect investigations and recalls.”

The project report also asserts that there are many “additional business applications to the pharmaceutical industry, allowing for compounding benefit for this industry once such a platform is established.” However, that substantial return on the blockchain investment will only be possible if there is “strong participation from all industry stakeholders (manufacturers, wholesalers, dispensers, service providers, etc.).”

Given that what is at stake is not just billions of dollars for the pharma industry, but the lives and health of millions of people who have been prescribed medication, all the involved parties should come together to solve the problem of counterfeit drugs. If the difficulties in accountability and identification for drug production could be remedied by blockchain, it should be universally implemented.

Source: Techopedia

The pros & cons of Intranet


Most companies incorporate an intranet into their business in some capacity. An intranet is a private computer network that operates within an organization and facilitates internal communication and information sharing with the same technology used by the internet. The major difference is that an intranet is confined to an organization, while the internet is a public network that operates between organizations.

With an effective intranet infrastructure, an organization can reap benefits across the board. In fact, an intranet can significantly improve efficiency and performance. Still, there are risks associated with setting up an intranet. Here, we'll discuss the pros and cons. 


 

Intranet Pros

Boosts Productivity

Having access to all the resources you need to perform your job tasks is an essential aspect of productivity. If you have to constantly take time out to find required information, or are unclear about recent changes to your responsibilities, that will have a negative impact on your productivity. An intranet acts as a one-stop shop for all workers. It provides them with all the relevant announcements, tools and information to perform their jobs. Easy access is provided by placing all the important information and tools on workers' individual desktops, which allows them to work smarter and faster.

Allows for Greater Collaboration

An intranet provides effective collaboration tools that are adaptable to a range of personal styles and communication methods. Every company has a diverse range of employees, each with his or her own working style and way of communicating, so collaboration between workers can be difficult.

An effective intranet solution provides separate areas for each department, allowing workers to collaborate and share relevant departmental information. An intranet also facilitates cross-department communication, which breaks down barriers and enables open communication between management and departmental levels. This functionality gives individuals opportunities to share potentially beneficial ideas and perspectives.

Provides a Social Networking Platform

Creating a social work environment is important because it creates stronger relationships between employees, leading to greater job satisfaction and productivity. Most intranet solutions utilize popular social media functionalities that allow staff to display their personalities on their intranet pages. Employees and management can share personal interests, hobbies and other aspects of their personal lives, providing a more personally interactive platform. Relationships forged through an intranet's social networking capabilities can positively impact staff job performance and collaboration. (For different corporate use of social networking, see CRM Meets Social Media.)

Simplifies Decision Making

Access to vital information is crucial to effective decision making. An intranet allows staff to share information and ideas.

Streamlines Data Management

Managing documents is key to any organization. With an intranet, you can easily upload and organize documents that can be accessed at any time. Employees can securely collaborate on projects and data. Document and information availability gives a company a transparent culture, which empowers staff.

Intranet Cons

Potential for Security Risks

Because you are providing open access to sensitive data, it is important to establish an effective security system via a gateway or firewall. Without appropriate security measures, your private data may be accessed by an unauthorized party – putting your company at risk.

Can Be Time Consuming and Costly

Despite the advantages of setting up an intranet, it can be a costly procedure, as dedicated teams must be assigned tasks to set up and configure the intranet for an organization. Additionally, an intranet is only effective when staff members fully understand how it should be used. It is equally important to ensure that staff know all the available intranet functionalities. This means resources must be used to train staff so they can adapt and continue performing their job duties. Without effective training, an intranet implementation can turn into a nightmare because it can impede staff's ability to perform their jobs – ultimately causing losses to the company.

Routine maintenance is a must to keep an intranet organized and functional. Posting regular content also is an important aspect of maintaining an intranet, as it ensures employees check their intranet regularly for new information. This can be a time-consuming process and requires dedication from the management team.

Can Be Counterproductive

An intranet can be an abundant and easily accessible resource for information. However, uploading excessive information in an unorganized manner can be counterproductive and create confusion between employees. Additionally, if information is not organized and cannot be easily navigated, productivity will be negatively impacted.

An effective intranet solution can have a profound impact on organizational productivity, collaboration and data management. Employees have the ability to interact and share information with ease, facilitating effective collaboration on projects – paving the way for increased productivity. With available desktop resources and tools, each employee can easily access everything they need to perform their job. All of these positive benefits stem from dedicating time and resources to setting up an effective intranet.

Source: techopedia

5 most in-demand tech jobs in 2018


As technology becomes more and more integral within our everyday lives, it is only natural that the same can be seen in the workforce. Tech jobs have become the highest in demand jobs and this trend is increasing daily.


According to Cyberstates 2017, an annual analysis of the tech industry by the technology association CompTIA, more than 7.3 million workers made up the tech-industry workforce as of 2017. The survey also looked at the unemployment rate in the tech industry and found that it is far lower than the national average in the US.

These results point to a positive trend in the tech industry, which means that if you’re looking for a well-paying job in a growing industry, a tech job might be your best option. With that in mind, let’s take a look at the 5 most in-demand tech jobs of 2018.

Blockchain Experts

 

Blockchain was a hot buzzword of 2017, along with bitcoin and cryptocurrency. While cryptocurrency has been a known concept since the mid-2000s, blockchain and bitcoin became the hot topic of conversation when they became a viable form of investment in late 2017.

With that in mind, 2018 is all about blockchain, as it is estimated that the tech behind blockchain will be used by companies across all industries. That’s why blockchain experts and analysts who understand the details of blockchain systems will be in huge demand worldwide.

The exciting thing about Blockchain is the technology behind it can be deployed to sectors ranging from e-voting to the patent industry in the coming years.

Blockchain experts who have a background in computer science coupled with good analytical and logical skills will understand how blockchain can be incorporated into different scenarios. A blockchain expert like this will be looking at an average base salary of £74,000.

Cloud Engineer

Cloud computing was one of the hottest trending topics of 2017 and the influence of Cloud has not deteriorated at all in 2018. In fact, more and more tech solutions are based on the principles of Cloud computing and the number is only expected to rise.

Pretty much every big application and popular piece of software has its databases in the cloud. For this reason, there is high demand for cloud specialists and cloud engineers, whose primary responsibilities involve designing, planning, managing, maintaining and supporting the various software that runs on cloud-based solutions.

Cloud engineers should be experienced with all major cloud solutions, such as Azure and AWS, as well as popular coding frameworks like PHP, Node and Python. The average base salary for cloud engineers is roughly £82,000.

 

AI Engineers

Artificial intelligence has grown substantially since its debut in the ’70s; due to its popularity in the last decade or so, it has been making leaps and strides, with rapid innovation and constant development. For those interested in working as an AI engineer, the demand outweighs the supply.

Due to the ever-evolving nature of AI, there’s a near constant need for AI engineers who can break new barriers and take us further than just self-driving cars.

AI engineers need a background in software engineering; the most sought-after programming language is Python, followed by C#, C++ and other frameworks. A successful AI engineer also needs a curious mind and a problem-solving aptitude.

AI engineers are looking at a base salary of around £88,000.

 

Mobile application Developer

There’s an app for everything, from sharing your acai bowl with friends to ordering a taxi. Smartphones and mobile apps have it all, are everywhere, and with each passing day innovative people are coming up with more and more ideas. This makes Mobile Application Developer a high-demand title.

Mobile application developers can become highly skilled within their field, either on specific platforms like iOS and Android or as experts in hybrid platforms like PhoneGap and React Native. Regardless of what you choose, good pay and constant demand are what you can expect.

A role like this includes writing code and developing applications from scratch, therefore a background in programming will be a priority.

A senior App developer will likely be earning upwards of £82,000 a year.

 

Cybersecurity Expert

The internet touches every aspect of our day-to-day lives, and the whole world has become an ever-growing cyberspace. Everyone who is active on the internet has personal information floating about somewhere, which is why cybersecurity should be a top priority for all companies – and why cybersecurity experts are in high demand.

Where there’s data, there is also a chance of it being misused, erased or tampered with. Making sure there is no unauthorised access is the cybersecurity expert’s job. They deal with preventing cyber attacks through their expertise on the subject and their in-depth knowledge of databases, networks, hardware, firewalls and encryption.

Before you can get hired as a cybersecurity expert, there are several certifications specific to the specialisation that need to be acquired. A high level of attention to detail and a fine eye for detecting anomalies in a system are must-haves for a cybersecurity expert.

You can expect to be paid an average base salary of £79,000 a year.

These are just a few of the most sought-after tech jobs for 2018; in an ever-evolving industry, there will be something for everyone.

Source: Irish Tech News

If you’re looking to take the next steps, contact Hanson Regan today Info@hansonregan.com

Master Pool with AR Pool Live Aid

Poolaid is going to help you become a master at all pool games. This is especially fantastic news for those of us who struggle to control a cue stick: this will be the AR experience for you.

A team of students from the University of the Algarve in Portugal are the designers behind it. Poolaid creates real-time light predictions of billiard shots. A projector hangs above the table; the system analyzes the positions of the billiard balls, detects the lines that correspond to them in relation to the cue, and projects those lines onto the table.

 

The students had this to say:

"We developed an algorithm that tracks and analyzes the ball's position. It detects lines that match up with the cue. The computer's connected to the projector too, so it updates right away."

Poolaid isn't the only projection mapping tool for pool that exists; there are many AR experiences that interact with a pool table. Obscura Digital released a stunning billiards experience with an interactive media production called Cuelight.

More recently, Openpool launched a Kickstarter for their projection mapping kit for billiards, which allows you to play with beautiful interactive visual effects.

AR is entering exciting realms, making the world around us digitally interactive, and I am looking forward to what else it has in store for us.

Following up CVs


The moment you begin sending out CVs, start keeping a log and set up a tracking method.

Recruitment experts suggest that every application should be followed up within 7-10 days if you have not had a personalised response. If you wish to follow up before then, e-mail a quick note asking if they received and were able to read your CV (or if they require a different format for their database), or better still, pick up the phone.

When to follow up:

• After you've sent your CV to contacts and acquaintances asking for their support during your job search.
• After you've sent cover letters and CVs to employers, regardless of whether they have a specific job opening.
• After you've had a networking meeting with someone.

How to follow up:

By (short!) email:

• Put your full name and the title of the position you've applied for in the subject line.
• Write a professional note that reiterates your qualifications and interest in the job.
• Attach your resume again.
• Include your full name in the file name of your resume.
• Changes to the Companies Act 2006 mean you must include your Company Name, Registered Address, Company Registration Number and Place of Registration in all your corporate emails.

By phone:

• Keep it short and sweet. Introduce yourself and remind the recruiter that you submitted a resume recently. Make sure you state exactly what job you're interested in. You can also ask if they received your resume and if they're still considering candidates for the position. In a difficult market, with more contractors chasing jobs, a phone call is likely to help you stand out more.
• Always try a few times to speak to someone if you get a recorded message at first.
• Try to strike a balance when following up – call too many times and you may achieve the opposite of your desired reaction!

If you’re looking to take the next steps, contact Hanson Regan today Info@hansonregan.com

Source: ContractorUK

Why don't businesses care about cyber security?

Businesses are still not getting the cyber security basics right, and they are not learning from past incidents. According to Troy Hunt, Pluralsight author and security expert, few businesses are learning from others’ past mistakes, as proven by cyber security incident after incident.

“A good example of this is the BrowseAloud compromise that hit thousands of government websites and organisations in the UK and around the world,” he told Infosecurity Europe 2018 in London.

“Despite the fact this had a fairly significant impact, many organisations have not learned the lesson, and most websites are not applying a free and easy fix, including those belonging to some UK and US government departments and some major retailers.”

 

The problem was caused by the corruption of a file in the BrowseAloud website accessibility service, which was automatically executed in the browsers of visitors to affected sites.

In addition to running the BrowseAloud service in the browser, the corrupted file also launched cryptocurrency mining software to enable the attackers to tap into the computing resources of visitors to affected sites to mine Monero cryptocurrency for the benefit of the attackers.

“This can be stopped with the use of a content security policy (CSP), which is just a few lines of code organisations can add free of charge to their websites to ensure that only approved scripts run automatically when they use third party services like BrowseAloud,” said Hunt.

“Despite the incident highlighting this issue, barely anyone is using CSPs. In fact, only 2.5% of the world’s top one million websites currently use CSPs to defend against rogue content,” he said.
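
For illustration, here is a minimal sketch of what setting such a policy could look like, assuming a site that only wants scripts from its own origin plus one explicitly approved third-party host. The Flask framework and the placeholder host name are assumptions made for this example; any web server or CDN can attach the same header.

# A minimal sketch of adding a Content Security Policy header in Flask.
# The approved third-party host below is a placeholder, not a real endpoint.
from flask import Flask

app = Flask(__name__)

# Only scripts from our own origin and one vetted host may execute;
# anything injected from an unapproved source is blocked by the browser.
CSP_POLICY = (
    "default-src 'self'; "
    "script-src 'self' https://approved-thirdparty.example.com"
)

@app.after_request
def set_csp_header(response):
    # Attach the policy to every response the server sends.
    response.headers["Content-Security-Policy"] = CSP_POLICY
    return response

The precise directives a site needs depend on which third-party services it actually loads, so a policy like this is a starting point rather than a drop-in fix.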

Hunt said a cryptocurrency miner was perhaps one of the “most benign” forms of content attackers could have chosen to launch through the compromised BrowseAloud file. “In reality, we got off lightly this time around, but we have not seen any significant action by website owners in response.”

This incident underlines the fact that many websites use services and content from third parties, which represents a security risk: attackers could compromise these in the same way the BrowseAloud file was compromised and execute malicious code through millions of websites.

“An analysis of the US Courts website reveals that its home page represents 2.3MB of data, which is the same size as the entire Doom game, and that almost a third of that is scripts, which is rather a lot of active content that is automatically loaded into visitors’ browsers, especially when you consider that you can do just about anything with JavaScript,” said Hunt.

Compounding the problem, he said, is that most organisations are poor at detecting malicious activity, which was well illustrated by the Sony Pictures cyber attack in 2014. “Various systems were compromised at the same time and different types of data stolen, but the first the company knew of it was when employees attempted to log in and were greeted with a message saying: ‘You’ve been hacked’.”

According to Hunt, who runs the HaveIBeenPwned website that aggregates breached records and makes them searchable for those affected, most organisations have no idea that they have been hacked, and even if they do, they have no idea what data may have been stolen.

“Many of them only find out when they get an email from me telling them that their data is available on the internet,” he said, adding that this underlines the fact that detection is often difficult. “But choosing a breach detection tool can be equally difficult. There are so many suppliers selling breach detection solutions, but it is difficult to work out what actually works.”

Organisations in the dark

Another indicator that organisations are not covering the basics, said Hunt, is that many still have no idea which company files are exposed to the internet.

According to security firm Varonis, 21% of all company folders are open to anyone on the internet, and of those open folders, 58% contain more than 100,000 files.

In summary, Hunt said organisations need to assess the state of their cyber security and ensure that at the very least they are addressing the basics because simple, well-known attacks are still working.

Organisations also need to understand that it is easier than ever for cyber attackers to make money out of their data thanks to the advent of cryptocurrencies.

Next, organisations need to understand that their websites and those that their employees visit to do their jobs are made up of code from multiple sources, and any one of these could represent a security risk.

And finally, given how difficult it can be to choose effective and affordable security solutions, organisations should not overlook those that are free and easy to implement.

Source: Computerweekly

BBC will be showing the World Cup in Virtual Reality

The BBC will be trialling VR and ultra-high-definition technology during its coverage of the 2018 FIFA World Cup in Russia. This will form part of the broadcaster’s cross-platform coverage, which will include TV, radio and digital channels.

Matthew Postgate, BBC chief technology and product officer, said: “The BBC has brought major live broadcasting breakthroughs to UK audiences throughout the history of the World Cup. From the very first tournament on TV in 1954 and England’s finest hour in 1966, to the first colour World Cup in 1970 and full HD in 2006. Now, with these trials, we are giving audiences yet another taste of the future.”

The BBC Sport VR – FIFA World Cup Russia 2018 app, which will be available to download for free on Apple, Android, Gear VR, Oculus Go and PlayStation VR, will enable users to watch the 33 matches the BBC is covering in real time.

The application allows various views of each game, including a virtual luxury private box or a seat behind one of the goals.

Viewers can also view live statistics about the game while it is in progress, or watch daily highlights and other on-demand content when there is no game taking place.

The BBC has been working on a number of research and development projects in recent years to prepare for a digital future and cater to consumers who increasingly expect to have customised content delivered to them any time on any device.

This includes the possibility of virtual reality TV in the future, as well as content based on a person’s interests and location.

For best performance when viewing the World Cup matches through VR, a connection of at least 10Mbit/s over WiFi is recommended, and when downloading the VR application, iOS 10 and above and Android 5 and above are needed.

BBC One’s 29 World Cup matches will be streamed in ultra-HD and high dynamic range (HDR) on BBC iPlayer for a limited number of first-come, first-served people – up to tens of thousands.

Recommended for those with a compatible ultra-HD TV and an internet connection of at least 40Mbit/s for the full 3,840-pixel ultra-HD or 20Mbit/s for 2,560-pixel ultra-HD, the stream will be available from the BBC iPlayer home screen once live coverage begins.

The BBC has developed the technology to make these streams available alongside Japanese broadcaster NHK, using hybrid log-gamma, a version of HDR designed to improve picture quality.

The broadcaster plans to gather data about its HD trial to help develop its user experience through this medium, and make plans for the future, when people are likely to expect events to be streamed across the internet in high quality to large audiences.

As audiences become more tech-savvy, the BBC has been investigating ways that people might want to consume content in the future. For example, in 2016 the broadcaster spoke about work it was doing on holographic TV development, which could give people a more immersive viewing experience.

The BBC has also run a pilot alongside Microsoft to test how users could use voice control to navigate the BBC iPlayer app, and it aims to redesign its digital iPlayer service by 2020 to better reflect the current content rental and streaming trend.

Source: Computerweekly

Sunrise up Croagh Patrick

We're supporting “Sunrise Up Croagh Patrick”, an annual get-together of friends who climb Croagh Patrick or walk nearby & cycle the Greenway, have a super time & raise funds for worthwhile charities fighting Neurological Diseases.

 

 

When we say climbing, it’s not vertical: no ropes, no scrambling, and with a bit of care it’s well within the reach of most people. However, this year there will also be a 4km or up-to-11km low-level walk near Croagh Patrick at the same time, for those who can’t or choose not to climb. On Sunday July 1st, some will be choosing to cycle from Achill or Mulranny to Westport along the Greenway. This is an optional extra and a trial event for this year.

It was initially organised by John Kelly (St Jarlaths 1979). We are proud to be sponsoring Sunrise Up Croagh Patrick once again. The event has grown and attracted wider support from many great people who have been affected by Huntington’s Disease, Parkinson’s, Motor Neurone Disease & Dementia.

This year’s event is on 30 June 2018 (Climb, walk & dinner) + July 1st (Cycle) and we will be staying in the Westport Plaza Hotel for two nights from 29 June. Details can be found on the website.

So why not join us for this great weekend of activities in Westport, for the 4th annual #SunriseupCroaghPatrick event. Form a group of your colleagues and friends, or come on your own and mingle with the whole gang. Have great fun and support very deserving charities.

Register for the event here or if you can’t make it, you can sponsor others who are making the trip.  

What gets a CV binned by an agent?

What’s in your CV can make the difference between being put forward for a role or not. But what key factors can ensure that yours stands out from the hundreds of others?

The most often-quoted rule of thumb is to keep your CV under two pages, or three at most. But arguably more important than length is ensuring that it is tailored to the role advertised. Particularly important is the front page summary: if this doesn’t obviously match you to the role, your CV is likely to be binned at the first hurdle.

“I once got knocked back for an enterprise architect role because the first page of my CV didn’t include the word ‘C++’,” says one contractor. “I spent the first ten years of my career doing C++, not that it’s relevant to the role anyway – and because it wasn’t on the front page, it didn’t get seen, and I didn’t get put forward.”

But not everyone agrees that a concise CV is required for every role. “Sometimes I like to see a bit more than that,” says recruitment agent Norman von Krause, “as I like to be provided with as much detail as possible, without going mad – particularly when people have had a lot of jobs.”

“Sometimes very senior roles can require more than two or three pages,” agrees Sarah, an IT recruiter for a large agency. “But in those cases the first page should make it very clear what qualifies you for the job.”

Other things guaranteed to get your CV filed in the bin include: “too much colour used, i.e. coloured fonts; fonts that are too wacky – it’s all a bit try-hard; and daft email addresses – stuff like sexybitch@hotmail.com”.

“Pointless information is a no-no on the first page,” says Sarah, and for experienced professionals this can include education details. “I’m not interested in the O-level in Biology you passed twenty years ago.”

So what are the things that will make sure your CV is seen by an agent – and seen by a potential client?

First, you need to make sure it can be found by an agent. Making sure that your CV hits all the potential search keywords – a sort of search engine optimisation in miniature – is perhaps the single most important thing you can do to make sure your CV is at least going to be found in the megalith databases of Monster and Jobserve, not to mention the various internal databases that recruitment consultancies use. So a WCF developer should make sure that their CV contains every possible combination of technologies, roles and acronyms that an agent searching for a WCF developer might look for – for example, .NET, Windows, Communication, Communications, Foundation, Developer, Programmer – and of course WCF.
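
As a rough illustration of the searches those databases run, the short Python sketch below checks a CV’s text for a list of search terms an agent might use. The keyword list and file name are made up for the example; the point is simply that a term not present in the text will never match.

# A rough sketch of agency-database keyword matching against a CV.
# The keywords and the CV file name are hypothetical examples.
keywords = [".NET", "Windows", "Communication", "Communications",
            "Foundation", "Developer", "Programmer", "WCF"]

with open("my_cv.txt", encoding="utf-8") as f:
    cv_text = f.read().lower()

# Report which search terms the CV would, and would not, be found under.
missing = [kw for kw in keywords if kw.lower() not in cv_text]
print(f"Matched {len(keywords) - len(missing)} of {len(keywords)} keywords")
if missing:
    print("Consider adding:", ", ".join(missing))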

As one agent wrote in the CUK forum recently: “A computerised search can scan CVs in more depth than I can, in far less time. If I can do something in 5 minutes, or two hours, with the same net result, which am I going to do?”

“Computerised searches are a fact of life, especially in IT,” adds Sarah.

Most important of all, though, says von Krause, are “CVs that are clear and easy to read. Write about your duties and experience in detail but don't use too many words. We don't want to see an essay.”

If you’re looking to take the next steps, contact Hanson Regan today Info@hansonregan.com

 

Source: ContractorUK

AI Today: Who is using it now and how?


AI (Artificial Intelligence) is a versatile tool, but how is it currently being used in business?

Artificial intelligence is all the rage in the enterprise these days. Stories abound about all the gee-whiz capabilities it will bring to our personal and professional lives.

But like any technology, there is usually a fair amount of hype before the reality sets in. So at this point, it is probably worth asking: Who is using AI right now, and how?


AI in Action

In a broad sense, says Information Age’s Nick Ismail, AI is already bringing five key capabilities to the enterprise:

• Voice/Image Recognition: Applications range from accurately transcribing meetings and sales calls to researching the impact of branding, logos and other visuals on the web.

• Data Analysis: Unstructured data in particular is very difficult to quantify. Using readily available tools, organizations are able to delve into the minutiae of their operations, supply chains, customer relations and a wealth of other activities to gather intelligence that is both accurate and actionable.

• Language Translation: Convert one spoken language into another in real time, an increasingly important tool for multi-national corporations.

• Chatbots: Automate the customer experience with a friendly, responsive assistant that can intuitively direct inquiries to the proper knowledge base (a minimal sketch of this idea follows the list).

• Predictive Analysis: Accurately forecast key data trends, such as cash flows, customer demand and pricing.
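
To give a flavour of the chatbot capability above, here is a deliberately simple Python sketch of keyword-based intent matching. Real enterprise chatbots rely on trained language models; the intents, keywords and answers below are invented purely for illustration.

# A deliberately simple sketch of chatbot intent matching (illustrative only).
KNOWLEDGE_BASE = {
    "baggage": "Each passenger may bring one carry-on bag up to 7kg.",
    "refund": "Refund requests can be submitted from your booking page.",
    "check-in": "Online check-in opens 48 hours before departure.",
}

INTENT_KEYWORDS = {
    "baggage": {"bag", "baggage", "luggage", "suitcase"},
    "refund": {"refund", "cancel", "money", "reimburse"},
    "check-in": {"check", "boarding", "online"},
}

def answer(inquiry: str) -> str:
    words = set(inquiry.lower().replace("-", " ").split())
    # Score each intent by how many of its keywords appear in the inquiry,
    # then route the user to the matching knowledge-base entry.
    best = max(INTENT_KEYWORDS, key=lambda intent: len(words & INTENT_KEYWORDS[intent]))
    if not words & INTENT_KEYWORDS[best]:
        return "Sorry, could you rephrase that?"
    return KNOWLEDGE_BASE[best]

print(answer("How big can my carry-on bag be?"))  # prints the baggage answer

The same look-up-and-score pattern, scaled up with machine learning, is what lets a production bot route thousands of inquiries to the right knowledge base entry.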

To see some of these capabilities in action, check out the new website for Peach Aviation, which features an automated response system providing multi-language support for customer inquiries. The system runs on the Desse AI agent provided by SCSK ServiceWare Corp. and can respond in all languages serviced by the airline: Japanese, English, traditional and simplified Chinese, Cantonese, Korean and Thai. It also uses data analysis to continuously monitor questions and answers, steadily improving quality. The company reports that out of 100,000 inquiries received in late December and early January, the system was able to provide automatic responses to 87 percent.

Yet another example of AI in action is a joint project by NBCUniversal and CognitiveScale to discern the key elements in a successful Super Bowl ad. The companies used CognitiveScale’s Cortex platform to analyze three years’ worth of game-day commercials and various client-engagement data to derive actionable insights linked to key video concepts, attributes and themes. For instance, the research showed that comedic effects work best with sales messages, while uplifting tones are more effective for branding.

While AI will not write and produce the perfect ad itself, NBCUniversal’s SVP of Corporate Analytics and Strategy Cameron Davies said it provides greater insight into what works and what doesn’t.

“The CognitiveScale platform gives us the ability to consider new ad strategies for companies who want to ensure their ads will be successful when they invest in production and media buying,” he said.

CognitiveScale is also working with organizations in the financial, health care and retail industries by allowing video data to undergo the same analytics processes as voice, image and text.

Baddies Beware

AI is also turning into an effective crime-fighting tool, says Forbes’ Rebecca Sadwick. It turns out that one of the biggest hindrances to modern law enforcement is the bureaucratic inertia that exists in both public and private processes. AI helps overcome these hurdles, bringing much-needed clarity to highly organized criminal enterprises ranging from money laundering to human trafficking to terrorism.

One of the key ways AI helps solve crimes is by lowering the cost for private entities of overseeing their transactions. Like any regulatory requirement, compliance is primarily a cost factor for organizations that are focused on profitability. Using third-party AI platforms specifically geared toward identifying suspicious data patterns, companies have not only lowered their costs but increased their chances of detecting nefarious activities. Prior to AI, it is estimated that nearly half of all financial crimes went unnoticed.

As well, banks and financial institutions that have deployed AI in this way actually help law-abiding citizens take part in fighting crime. Every time a legal transaction is processed, a learning algorithm is exposed to the normal patterns of money movement and is thus better equipped to identify transactions that break these patterns.
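
A toy version of that idea is sketched below using scikit-learn's IsolationForest, which learns the shape of "normal" transactions and flags departures from it. The data is synthetic, and the two features (amount and hour of day) are assumptions chosen purely for illustration.

# A toy sketch of learning normal transaction patterns and flagging outliers.
# Synthetic data; real systems use far richer features than these two.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulate "normal" behaviour: modest amounts, mostly daytime hours.
normal_transactions = np.column_stack([
    rng.normal(80, 20, 1000),   # transaction amount
    rng.normal(14, 3, 1000),    # hour of day
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_transactions)

# predict() returns 1 for normal-looking rows and -1 for anomalies.
new_transactions = np.array([[75.0, 13.0], [9500.0, 3.0]])
print(model.predict(new_transactions))  # expected: [ 1 -1 ]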

Technology is a two-way street, of course, so the same technology that is currently helping to fight crime can also be used to conduct it. With enough computing power, an intelligent system might be able to leverage the rising trend of micro-transactions by breaking up large transactions into numerous, perhaps millions, of smaller ones that are harder to detect and track. As well, quantum technology has yet to make its presence known in the criminal underworld (as far as we know, at least), which would open up an entirely new front in the war against cybercrime. (To learn more about quantum computing, see The Challenge of Quantum Computing.)

Clearly, we are in the earliest stages of AI development, so there will no doubt be numerous other ways in which it will affect mainstream enterprise processes as the market matures. Unlike earlier technologies, however, AI is expected to improve with age as it incorporates both human- and machine-generated data to forge a greater understanding of the environment it occupies and how best to navigate it.

And this is likely to be the most profound change of all: the end of lengthy development processes in which new features come out once a year (if that) and can only be implemented by taking infrastructure and data offline. In the future, digital systems will get better with age, all by themselves.

Source: Techopedia

3 Amazing examples of AI in action


The capabilities of AI are increasing by leaps and bounds, and machines are beginning to comprehend things at near human level. Some see this as revolutionary progress, while others look on it with caution.

What is the mind? Is it simply a collective sum of networked neural impulses? Is it less or more than that? Where does it begin and where does it end? What is its purpose? Is it the soul? These are questions that have haunted human consciousness for much of its existence. But in this increasingly digital age, we gain exciting new insight into the nature of consciousness by artificially simulating it.


Artificial intelligence is somewhat loosely defined, but can generally be understood as a subset of another field called biomimetics. This science (interchangeably referred to as "biomimicry") imitates natural processes within technological systems, using nature as a model for artificial innovation. In nature, evolution rewards beneficial traits by proliferating them throughout the natural ecosystem, and technology shares similar tendencies, in that the technology that yields the most useful results is that which thrives.

As machines develop the ability to learn, compute and act with a level of creativity and individual agency that is virtually human, we as people are confronted with increasingly complex but imminent questions surrounding the nature of AI and its role in our future. But before we delve too deeply into the semantics of artificial intelligence, let’s first examine three ways in which it is already beginning to manifest in our world.

Recognition

Human perception is like a set of input devices on a computer. Visual data hits the human retina and then flows through the optic nerve to the brain. Sound waves hit the outer and then middle ear before the inner ear begins the neuronal encoding process. Touch, smell and taste similarly transform external stimuli to internal neurological activity. And our memory serves as a database within which this sensory information can be cross-referenced, identified and put to use.

The computer reflects human anatomy in its configuration of input, transduction and storage. Cloud technology has evolved into a sort of collective consciousness that stores, vets and distributes shared knowledge and ideas. Image and sound recognition software use camera and microphone hardware to input and cross-reference data with the cloud, in turn outputting an explanation to the user of that which was seen or heard. Recognition apps like CamFind and Shazam basically serve as sensory search engines, while the fields of robotics and automated transportation build machines that use recognition technology to navigate and act within the world with unprecedented independence. (For more on AI's attempts to become more human, see Will Computers Be Able to Imitate the Human Brain?)

CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) has served as one of the most effective validation tools in internet security for many years now. It is well known for blocking automated password breaches with a challenge-response interface that has long been recognizable only by humans. However, a team known as Vicarious has managed to develop a breach of the software using a program that simulates the human thought process. The node-based software assesses the CAPTCHA image in stages and, like a human mind, is able to break elements of the image into components that are compared with language characters in a database. CAPTCHA has long been emblematic of the difference between machine intelligence and human intelligence. But with Vicarious’s new innovation, the line between the two is being blurred.

Prediction

There is a great deal of economic incentive for predictive technology. The discipline is used extensively in marketing by gauging customer behavior and data in order to anticipate commercial activity and maximize profits. Analytics help businesses determine where to expend their efforts and achieve the most desirable results, and to help them make the predictions needed to compete in the modern digital economy. The technology is also implemented into some government and policing efforts, which some view as highly useful while others see it as potentially harmful, as the tactics could employ biased statistics and perpetuate discriminatory practices.

But with predictive analytics improving disciplines like medicine and environmental science, there also exists a great deal of potential for social good in both the private and public sectors. Predictive health IT systems work to improve accuracy and efficiency in health science, and elevate preventative medicine to a level that virtually automates it. Intelligent systems employ prediction in order to identify future benefits and avoid potential problems, and can provide assistance to people before they even realize that they need it. (To learn more about predictive analytics, see Predictive Analytics in the Real World: What Does It Look Like?)

Some technology business leaders prefer the term "augmented intelligence" over "artificial intelligence" and argue that threats posed by the technology are very minor compared with their potential benefits. However, some are not so optimistic.

Activism

There are many renowned scientists and technology innovators who believe that artificial intelligence can potentially have catastrophic consequences. Among them is Elon Musk, who now co-chairs a nonprofit research organization called OpenAI. Musk, in fact, has stated that he believes artificial intelligence could be humanity’s greatest existential threat, and through OpenAI, he and his team are attempting to cultivate ideas and initiatives in AI that will be geared toward the greater public good. The organization intends to develop AI systems that are open-source friendly, and is currently focused on deep learning research.

Musk justifies the initiative by arguing that it is better to participate in artificial intelligence early enough in its development that it can be steered toward human progress rather than private gain, but without depending on regulation to dictate its terms and purpose. OpenAI maintains a vision for decentralized and crowdsourced technology that maximizes AI’s potential benefits for humankind.

Conclusion

Whether or not the technology will benefit humanity is inherently difficult to predict. But one thing is almost certain: Whoever controls artificially intelligent technology in its early stages will wield considerable power and influence over all of human civilization. Money, labor, government and media are just a few facets of society that will be changed dramatically by these innovations. And it is up to us to set the technology on the right path while we still have the power to do so.

 

Source: Techopedia

The next big update from SAP


SAP’s next big update takes aim at Salesforce on multiple fronts: the company plans to tie its back-end financial software to the front office and redefine customer relationship management. SAP has already assembled the CRM elements; the challenging part, putting them together, comes next.

SAP CEO Bill McDermott unveiled a suite called SAP C/4HANA, which will incorporate the acquisitions of Hybris, Gigya and CallidusCloud and will cover consumer data, marketing, commerce, sales and customer service.

SAP’s new acquisition, Core Systems, will add to its field services capabilities. The Swiss company uses AI and crowd sourcing to manage field service technicians. Core Systems will become part of SAP’s Service Cloud.

McDermott said the company "was the last to accept the status quo of CRM and is now first to change it. That's a guarantee."

With that, McDermott noted the need to revamp “legacy CRM systems” that revolve around sales. SAP’s apps, along with Oracle’s Siebel systems, were the legacy CRM products upended by Salesforce. Now SAP is trying to paint Salesforce into a legacy corner.

Discussing CRM, McDermott said it revolves around providing one view of the customer. By aligning and integrating SAP’s core strengths with CRM, the company aims to differentiate itself. SAP’s priorities here are machine learning via SAP Leonardo and integration with SAP S/4HANA, its ERP suite, which has about 8,300 customers. Speaking during his keynote, McDermott said:

“There is a direct correlation between the size of the problems we solve and our existence and relevance. Our greatest validation is our customers’ success.”

McDermott argued that CRM has to change. "We have moved from 360 degree view of sales automation where some companies focus to 360 degree view of the actual customer," said McDermott. The idea is that the supply chain and transactional data will be connected to the customer record and commerce in any channel on top of SAP Cloud Platform.

Salesforce’s latest quarter highlights the strong demand for a relationship operating system; however, Salesforce has already made big strides in becoming that, and continues to acquire and grow.

SAP's C/4HANA portfolio includes SAP's marketing, commerce, service, customer data and sales clouds. SAP Sales Cloud unites Hybris Cloud for Customer, Hybris Revenue Cloud and CallidusCloud. SAP has consolidated these front-office functions and cloud-based CRM efforts under a customer experience management suite.

According to Ray Wang, principal of Constellation Research, SAP has a CRM installed base due to bundling the application with its ERP tools. However, customers are also buying Salesforce even if SAP is running the financials and back office. The two CRM leaders in Wang's view are Salesforce and Microsoft in terms of users. Oracle also has a large installed base.

Add it up and SAP's CRM plans may be more about keeping itself in the loop with customers and gaining enough mind share with enterprises. SAP's Service Cloud is solid, said Wang. "SAP is saying that it is not ready to cede the market to Salesforce," he said. "SAP has a base there and there are Hybris commerce customers that may look to SAP for marketing."

SAP is also looking to shift the CRM conversation from managing sales to gaining productivity and return on investment.

 

“Putting it all together will be a lot of work,” said Wang, regarding how SAP will blend its various moving parts and technologies into a coherent suite. “SAP will have to get to a level of a common UI,” he added, noting that SAP’s Fiori design language has turned out better than expected and can bridge gaps between the applications.

 

Selling SAP C/4HANA

To date, SAP's cloud strategy has been fairly straightforward: Acquire companies with installed bases and then cross-sell to bring down customer acquisition costs. Whether it's more recent purchases such as Callidus, Concur or Gigya or older ones like SuccessFactors or Ariba, SAP has mastered the cross-sell and wallet share momentum.

But if SAP is going to become a CRM player with a new customer-first e-commerce spin, the company will have to branch out into playing small ball. SAP has historically been about large enterprise deals, but the software market is increasingly direct and land-and-expand.


Enter Bertram Schulte, chief digital officer of SAP. At SapphireNow, SAP outlined plans to make SAP.com transactional. Schulte's team of 100 people has a simple mission: Simplify the buying process for customers.

Today, customers and partners buy SAP applications, and the back-and-forth with contracts, procurement and fulfillment can take weeks, explained Schulte. Adding 10 more users or an extension module goes through a similar process.

SAP.com is now aiming to handle those transactions. "We are establishing the digital channel and it won't be a parallel universe to field sales, but an augmentation," he said. "There will be channel parity."

As a result, the new SAP.com should facilitate more subscriptions over time. "This is also a cultural effort. In big deal scenarios, we don't rely on scalable no touch efforts. We need to think about trial to buy and retention. It's a land and expand way to think about it."

Schulte said that SAP is farther along than it initially thought it would be when the digital initiative launched. While the digital sales efforts may not be a direct fit with C/4HANA, the plan is worth noting. If SAP is really going to challenge Salesforce in CRM it is going to have to play small ball and get some folks to try out its software on the side.

The C/4HANA, S/4HANA Promise

Should SAP's C/4HANA really get traction, it's likely to be with customers that have already standardized on S/4HANA ERP as a platform.

The move to launch C/4HANA also illustrates how enterprise software vendors are going for platform plays across multiple functions. If successful, SAP's efforts will rhyme with what Microsoft is doing with Microsoft 365. Think enterprise software buffet.


But first, those S/4HANA standardization efforts need to pick up. To that end, Accenture announced at SapphireNow that it has rolled S/4HANA out broadly to 15,000 users. According to Dan Kirner, Accenture's deputy CIO, S/4HANA was coupled with Microsoft Azure to support diverse units, add real-time analytics and financial reporting, integrate mergers and acquisitions, and allow SAP's new technologies to be added onto the S/4HANA base.


That last point is critical if C/4HANA is going to be a big success. Accenture, a key SAP systems integration and consulting partner, runs on SAP across the company. "The whole ERP market is moving that way (to a platform)," said Kirner. "We look at SAP as an overall suite whether it's finance, SuccessFactors, Ariba or Concur."

Accenture took a year to fully roll out S/4HANA and it's among the first companies of its size to complete an implementation on S/4HANA and Azure.

Those early S/4HANA customers are going to be an initial target customer base for C/4HANA. It remains to be seen whether SAP's CRM efforts expand beyond its installed base. The company, of course, is optimistic.

"We believe C/4HANA is very differentiated and in line with what modern enterprises are thinking of today when it comes to customer experiences," said Alex Atzberger, president of SAP Hybris. "We don't believe our customers have seen SAP as a CRM choice, but we're now going all-in on CRM again."

 

Source: Zdnet

How AI is helping fight crime

Artificial intelligence (AI) is being used both to monitor and prevent crimes in many countries. In fact, AI’s involvement in crime management dates back to the early 2000s. AI is used in such areas as bomb detection and deactivation, surveillance, prediction, social media scanning and interviewing suspects. However, for all the hype and hoopla around AI, there is scope for growth of its role in crime management.

Currently, a few issues are proving problematic. AI is not uniformly engaged across countries in crime management. There is fierce debate on the ethical boundaries of AI, compelling law enforcement authorities to tread carefully. Defining the scope and boundaries of AI, which includes personal data collection, is a complex task. Problems notwithstanding, AI represents the promise of a new paradigm in crime management, and that is a strong case for pursuing it. (For more on crime-fighting tech, see 4 Major Criminals Caught by Computer Technology.)

What Is the Crime Prevention Model?

The crime prevention model is about analyzing large volumes of various types of data from many different sources and deriving insights. Based on the insights, predictions can be made on various criminal activities. For example, social media provides a veritable data goldmine for analysis – though, due to privacy concerns, this is a contentious issue. It is a known fact that radicalization activities by various groups are done through social media. AI can reveal crucial insights by analyzing such data and can provide leads to law enforcement agencies.

There are also other data sources, such as e-commerce websites; Amazon and eBay can provide valuable data on the browsing and purchasing habits of suspects. This model is not new, though. Back in 2002, John Poindexter, a retired U.S. Navy admiral, developed a program called Total Information Awareness, which prescribed collecting data from online and offline sources. But following vehement opposition over privacy intrusion issues, funding support for the program was stopped within a year.

Real-Life Applications

AI is starting to be used for crime prevention in innovative ways around the globe.

Bomb Detection and Deactivation

The results of deploying robots in detecting bombs have been encouraging, which has led to the military procuring robots worth $55.2 million. Over time, robots have become more sophisticated and can distinguish between a real bomb and a hoax by examining the device. According to experts, robots should soon be able to deactivate bombs.

Surveillance, Prevention and Control

In India, AI-powered drones are used to control crowds by deploying pepper spray and paintballs or by making announcements. The drones are fitted with cameras and microphones. It is believed drones will soon be able to identify people with criminal records using facial recognition software and predict crimes with machine learning software.

Social Media Surveillance

Social media provides the platform for executing different crimes such as drug promotion and selling, illegal prostitution and youth radicalization for terrorist activities. For example, criminals use hashtags to promote different causes to intended audiences. Law enforcement agencies in the U.S. have succeeded to an extent in tracking such crimes with the help of AI.

Instagram, for example, is used to promote drug trafficking. In 2016, New York law enforcement used AI to track down drug peddlers. AI searched for millions of direct and indirect hashtags meant to promote drugs and passed on the information to police. Similarly, to tackle radicalization of youth, law enforcement agencies are using AI to monitor conversations in social platforms.
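
At its simplest, that kind of monitoring boils down to extracting hashtags from a stream of posts and matching them against a watchlist, as in the bare-bones Python sketch below. The posts and watchlist entries are made up for illustration; real systems layer machine learning on top to catch novel and indirect hashtags.

# A bare-bones sketch of hashtag watchlist matching on a stream of posts.
# Posts and watchlist entries are invented for illustration.
import re

WATCHLIST = {"#exampletag1", "#exampletag2"}

posts = [
    "great day at the beach #sunny",
    "dm for details #exampletag2 #fast",
]

for post in posts:
    # Pull out every hashtag in the post, then intersect with the watchlist.
    tags = set(re.findall(r"#\w+", post.lower()))
    if tags & WATCHLIST:
        print("Flagged for review:", post)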

Interviewing Suspects

An AI-powered chatbot at a university in Enschede, Netherlands is being trained to interview suspects and extract information. The bot, named Brad, is expected to examine the suspect, ask questions and detect from answering patterns and psychological cues whether the suspect is being truthful. It is still in the beginning stages, but the development represents a new aspect of crime management.

Advantages and Disadvantages

While these futuristic advances in law enforcement have a lot of potential, one must also consider the drawbacks.

Advantages

Security needs and considerations are dynamic and complex, and you need a system that adapts quickly and efficiently. Human resources are capable, but have constraints. In this view, AI systems have the advantage of being able to scale up to do their jobs more efficiently. For example, monitoring possible criminal activities on social media, from a manual perspective, is a gargantuan task. Human approaches can be erroneous and slow. AI systems can perform this task by scaling up and performing the tasks faster.

Disadvantages

First, for all the hype around it, AI’s involvement in crime management is still at a nascent stage. So, cut through the hype and accept that its efficiency in crime prevention or control on a larger scale is still unproven.

Second, crime prediction and prevention will require data collection, much of which could be personal data. This makes the government and law enforcement agencies vulnerable to extreme criticism from citizens and other groups. This will be interpreted as intrusion on citizens’ freedom. Data collection and snooping have been extremely contentious issues in the past, especially in democratic countries.

Third, developing AI systems that learn from unstructured data can be an extremely challenging task. Since criminal activities are becoming ever more sophisticated, it will not always be possible to provide structured data. It is going to take time for such systems to adapt.

Conclusion

Currently, there are many challenges confronting the involvement of AI systems in crime management. However, it is worth the effort to engage AI in crime prevention and control. The nature of crime and terrorist activities is evolving to become more sophisticated every day, and purely human involvement is no longer enough to tackle such problems. In this context, it may be important to note that AI will not replace human beings, but will complement them. AI systems can be fast, accurate and relentless – and it is these qualities that law enforcement agencies will want to exploit. As of right now, it seems that AI will continue to become even more prominent in law enforcement and crime prevention.

Source: Techopedia

What’s stopping the adoption of machine learning?

The latest advances in machine learning are currently rocking the market, with artificial intelligence (AI) leading the way as the most revolutionary technology. Recent studies show that 67% of business executives look at AI as a means to automate processes and increase efficiency. Everyone is talking about AI, as it looks like it is going to change our world forever.

General consumers believe AI to be a potential instrument for increasing social equity, with 40% believing that AI will expand access to fundamental services, such as medical, legal and transportation services, for those on lower incomes. However, the adoption of AI for process automation could be much higher; a few issues are currently blocking it.

 

Lack of Organisation

Companies are made up of many organisational heads who need to make the decisions: the CIO, CDO and CEO. All these officers run their own departments, which are supposed to drive their AI efforts together, at the same time and with the same level of effort. That sounds easy enough on paper, but in real life it rarely happens.

Clarifying who is responsible for spearheading the machine learning project and its implementation within the company is the first step. Where several data and analytics teams need to sync up their operations, it is not unusual for them to dilute their work across an assortment of smaller projects which, although they contribute to the understanding of machine learning, fail to achieve the automation efficiency needed by the core business.

Insufficient training

Recent developments in deep learning algorithms have helped machine learning take a massive leap forward, though the technology is both old and new, as basic AI dates back to the early ’80s. True specialists, though, are few and far between, and companies like Google and Facebook scoop up 80% of the machine learning engineers who possess in-depth knowledge of the field.

Many companies know their limits, and no more than 20% think their own IT experts possess the skills needed to tackle AI. Demand for machine learning skills is growing very quickly, but those who possess the necessary expertise in deep learning algorithms may lack the relevant qualifications. Because this field is still new, many who are paving the way today are old-time programmers from an era when degrees in machine learning didn’t exist.

Inaccessible Data and Privacy protection

AIs need to be fed a lot of data before they can begin to learn anything through learning algorithms. However, most of this data is not ready for consumption; this is especially true of unstructured data. Data aggregation processes are complex and time-consuming, especially when the data is stored separately or with a different processing system. All these steps need the full attention of a specifically dedicated team composed of different kinds of experts. (For more on data structure, see How Structured Is Your Data? Examining Structured, Unstructured and Semi-Structured Data.)

Extracted data is also often unusable whenever it contains vast amounts of sensitive or personal information. Although obfuscation or encryption of this information eventually makes it usable, additional time and resources must be devoted to these burdensome operations. To solve the problem upstream, sensitive data that needs to be anonymised must be stored separately as soon as it is collected.
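
As a concrete illustration of solving the problem upstream, the Python sketch below replaces a sensitive field with a keyed pseudonym at collection time, so the analytics store never sees the raw value. The field names are invented, and a real deployment would need proper key management and a separate, access-controlled store for the original data.

# A sketch of pseudonymising a sensitive field at collection time.
# Field names are invented; a real system needs managed keys and
# a separate, access-controlled store for the raw values.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"  # assumption: held in a key vault

def pseudonymise(value: str) -> str:
    # Keyed hash: stable enough for joins and learning, but not
    # reversible without the key.
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

raw_record = {"customer_name": "Jane Doe", "amount": 42.50}

analytics_record = {
    "customer_id": pseudonymise(raw_record["customer_name"]),
    "amount": raw_record["amount"],
}
print(analytics_record)  # contains a pseudonym, never the raw name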

 

Trust and Believability

When a deep learning algorithm cannot be explained in a simple way to a person who is not an engineer or programmer, the interest of those who might harness AI for new business opportunities may start to dwindle. This seems to be especially true in some of the more traditional industries. Most of the time, in fact, historical data is practically non-existent, and the algorithm needs to be tested against real data to prove its efficiency. It is easy to understand how, in some industries such as oil and gas drilling, a less-than-optimal result may lead to substantial (and unwanted) risks.

Many companies that still lag behind in terms of digital transformation might need to revolutionize their whole infrastructure to adopt AI in a meaningful way. Results might require a long time before they're visible, as data needs to be collected, consumed and digested before the experiment bears fruit. Launching a large-scale machine learning project with no guarantee that it is worth the investment requires a certain degree of flexibility, resources and bravery that many enterprises simply might lack.

 

Conclusion

In a curious turn of events, many of the roadblocks that still slow or stall the advancement of AI are linked to human nature and behaviours rather than to the limits of the technology itself.

There are no definite answers for those who still doubt the potential of machine learning. This is a path that has never been trodden, and field experimentation is still needed during this development phase. Once again, it is our turn to leverage one of the characteristics that helped humanity achieve its most extraordinary heights: our ability to adapt. Only this time we need to teach this skill to our intelligent machines.

The 5 most amazing AI advances in Health Care

Artificial intelligence is revolutionizing our world in many unimaginable ways. On the verge of the Fourth Industrial Revolution, humanity is currently witnessing the first steps made by machines in reinventing the world we live in. And while we keep debating about the potential drawbacks and benefits of substituting humans with intelligent, self-learning machines, there's one area where AI's positive impact will definitely improve the quality of our lives: the health care industry.

Medical imaging

Machine learning algorithms can process unimaginable amounts of info in the blink of an eye. And they can be much more precise than humans in spotting even the smallest detail in medical imaging reports such as mammograms and CT scans.

The company Zebra Medical Vision developed a new platform called Profound, with algorithm-based analysis of all types of medical imaging reports that is able to find every sign of potential conditions such as osteoporosis, breast cancer, aortic aneurysms and many more with a 90 percent accuracy rate. And its deep learning capabilities have been trained to check for hidden symptoms of other diseases that the health care provider may not have been looking for in the first place. Other deep learning networks even earned a 100 percent accuracy score when detecting the presence of some especially lethal forms of breast cancer in biopsy slides.

 

Computer-based analysis is so much more efficient (and less costly) than human interpretation of data or images that some have even argued that in the future it could become unethical not to substitute AI for humans in professions such as radiology and pathology! (For more on IT in medicine, see The Role of IT in Medical Diagnosis.)

Electronic Medical Records (EMRs)

The impact of electronic medical records (EMRs) on health information technology is one of the most controversial topics of debate of the last decade. According to some studies they represent a turning point in improving quality of care while increasing productivity and timeliness as well. However, many health care providers found them cumbersome and difficult to use, leading to substantial technology resistance and widespread inefficiency. Could the newer AI-driven software come to the rescue of the many doctors, nurses and pharmacists fumbling every day with the unwieldy clunkiness of EMRs?

One of the biggest issues with this new health care technology is that it forces clinicians to spend far too much of their precious time performing repetitive tasks. AI, however, can easily automate them, for example by using speech recognition during a visit to record every detail while the physician talks with the patient. Charts can and will include much more detailed data, collected from a variety of sources such as wearable devices and external sensors, and the AI will feed it directly into the EMR.

But moving forward from the first step of data collection, once enough relevant info is correctly understood and extrapolated by deep learning algorithms, it can be used to help improve quality of care in a lot of ways. It can enhance patients’ adherence to treatment and reduce preventable events, or even guide doctors via predictive AI analytics in treating high-cost, life-threatening conditions. To name a practical example, a recent study published in the JAMA Network showed how big data extracted from EMRs and digested by an AI at University of California, San Francisco Health helped with the treatment of potentially lethal Clostridium difficile (C. diff) infections.

And it's easy to see why medical record data mining is going to be the next “big thing” in health care, when none other than Google has launched its own Google DeepMind Health project to improve the speed, quality and equity of access to care.

Clinical Decision Support (CDS)

Another interesting example of how deep learning can help machines make better decisions than their human counterparts is the proliferation of clinical decision support (CDS) tools.

These tools are usually built into the EMR system to assist clinicians in their work by suggesting the best course of treatment, warning of potential dangers such as pharmacological interactions or pre-existing conditions, and analysing even the slightest detail in a patient’s health record.
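
As a toy illustration of one such check, the sketch below flags known drug-drug interactions when a new prescription is added. The interaction table is a tiny hypothetical stand-in for the curated clinical databases real CDS tools rely on:

```python
# Toy CDS check: warn when a newly prescribed drug interacts with a
# drug the patient already takes. KNOWN_INTERACTIONS is a tiny
# hypothetical stand-in for a curated clinical database.
KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "raised statin levels",
}

def check_new_prescription(current_meds, new_drug):
    """Return a warning for every known interaction with the new drug."""
    warnings = []
    for med in current_meds:
        pair = frozenset({med.lower(), new_drug.lower()})
        if pair in KNOWN_INTERACTIONS:
            warnings.append(f"{new_drug} + {med}: {KNOWN_INTERACTIONS[pair]}")
    return warnings

print(check_new_prescription(["Warfarin", "metformin"], "Aspirin"))
# ['Aspirin + Warfarin: increased bleeding risk']
```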

An interesting example is MatrixCare, a software house that integrated Microsoft's famous AI Cortana into its tool for managing nursing homes. The potent analysis capabilities of the machine learning engine strengthened the decision-making ability of the support tools immeasurably.

“One doctor can read a medical journal maybe twice a month,” explained CEO John Damgaard. “Cortana can read every cancer study published in history before noon and by 3 p.m. is making patient-specific recommendations on care plans and improving outcomes.”

CDS also brings up the point that machines can communicate with each other much better than humans do. In particular, different medical devices can all be connected to the internet just like any other internet of things (IoT) device (wearables, monitors, bedside sensors, etc.), and to the EMR software as well. Interoperability is a critical issue in modern health care, as fragmentation in the delivery of care is a major cause of inappropriate treatment and increased hospitalizations. When led by smart AI, the various EMR platforms become able to “talk” to each other through the internet, increasing cooperation and collaboration between different wards and even different health care facilities.

Drug Development

Developing a new drug through clinical trials is often a very costly affair, not just in terms of time (we're talking about decades) and dollars invested (the costs can easily reach several billion), but in human lives as well. Many new pharmaceuticals require years of additional testing on real-world subjects during the so-called post-marketing period, and it's not uncommon for serious (or even deadly) side effects to be discovered years after a medication has been launched.

Once again, efficient supercomputer-fuelled AI can root out new drugs from databases of molecular structures that no human could ever hope to analyse. A prominent example is Atomwise's AI, which identified two drugs that could help put a stop to the Ebola virus epidemic. In less than one day, its virtual search found two safe, already existing medicines that could be repurposed to fight the deadly virus. The best part is that it found a way to react effectively to a pandemic emergency just by scanning through drugs that had already been marketed to patients for years, with their safety already proven. (To learn more about how technology is guiding drug development, see Big Data's Influence in Medicine and Pharmaceuticals.)
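
The sketch below gives a drastically simplified picture of that kind of virtual screening: rank a library of already-approved compounds by a predicted score against a target and shortlist the top candidates for lab testing. The compound names and scores are invented, and this is the general pattern rather than Atomwise's actual method:

```python
# Toy virtual screening: shortlist already-approved compounds whose
# predicted score against a target clears a threshold. Names and
# scores are invented; real screens use learned scoring functions
# over millions of 3D molecular structures.
approved_compounds = {
    "compound_a": 0.91,
    "compound_b": 0.34,
    "compound_c": 0.88,
    "compound_d": 0.12,
}

def shortlist(scores, threshold=0.8):
    """Return candidate compounds, best predicted score first."""
    hits = [name for name, score in scores.items() if score >= threshold]
    return sorted(hits, key=lambda name: scores[name], reverse=True)

print(shortlist(approved_compounds))  # ['compound_a', 'compound_c']
```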

A Leap into the Future

Some of the most amazing technologies are not ready yet, being nothing more than prototypes, but their implications are so breathtaking that they're still worth mentioning.

One of these is precision medicine, an ambitious discipline that uses deep genomics algorithms to scan a patient's DNA for mutations and anomalies that could be linked to diseases such as cancer. People like Craig Venter, one of the fathers of the Human Genome Project, are currently working on a new generation of computational technologies that can predict the effects of any genetic alteration, paving the road to individualized treatments and early detection of many preventable diseases.
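
At its simplest, the starting point for such algorithms is comparing a patient's sequence against a reference and flagging the differences, as in this toy sketch. The sequences are invented, and real pipelines align billions of bases before any predictive modelling begins:

```python
# Toy variant scan: flag every position where a patient's sequence
# differs from a reference. The sequences are invented; real pipelines
# align billions of bases before any predictive modelling begins.
REFERENCE = "ATGGCCTAA"
PATIENT = "ATGGACTAA"

def find_variants(reference, sample):
    """List (position, reference_base, sample_base) for every mismatch."""
    return [(i, ref, alt)
            for i, (ref, alt) in enumerate(zip(reference, sample))
            if ref != alt]

print(find_variants(REFERENCE, PATIENT))  # [(4, 'C', 'A')]
```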

A Word to the Wise

As excited as we may be about the huge potential of introducing AI to health care, it is important that we understand its limitations. Using AI in medicine is not devoid of risks, although many of them will be overcome as we grow accustomed to it.

The maxim “do no harm” is critical to establishing ethical standards that can act as boundaries. Today we carry the responsibility of building the framework upon which future generations will make their decisions.

Source - Techopedia

What the car of the future looks like

What the car of the future looks like

What does the vehicle of the future look like? How does it work? Can it make our world more efficient, safer and ecologically sound? There is swirling uncertainty around what lies ahead for the automotive industry, but this doesn't have to be scary; in fact, it's very exciting. That uncertainty holds potential that people are working to capture as they help build the car of the future.


IoT in the future of car technology

Before we look to the future, we must look at where we are today. Most of us don't yet have much contact with the Internet of Things (IoT); it is still sheltered away in closed, controlled industrial spaces.

When it comes to everyday life, most people use IoT in the form of wearables or home assistant devices. In connection with vehicles, we can see that those platforms are important and engaging, and are transforming the way we go about our daily lives.

That transformation begins with today's advances in car tech, such as telematics and infotainment services. Soon, automotive IoT will evolve into over-the-air updates, self-driving capability and vehicles that interact with the world around them.

Reaching the 5G mile marker

Right now, the car of the future is just on a practice lap; 5G will give us the green flag to speed up innovation. The millisecond latency of 5G will enable workloads to be shifted, balancing what work gets done in the car and what gets done in the cloud. This makes access to data faster and allows us to transform the onboard architecture of vehicles.

By utilizing edge computing, beamforming and network slicing, cellular network operators will be able to support roads full of self-driving vehicles. But the car of the future does not end its race once we round the turn to 5G: once we figure out how the car of the future works, we must then decide how we want to use it.
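
As a rough sketch of that balancing act, the logic below decides where a task should run based on its latency budget. The task names and millisecond figures are illustrative assumptions, not measured values:

```python
# Illustrative car-versus-cloud split: offload a task only when the
# 5G round trip plus cloud processing still beats its deadline.
# Task names and millisecond figures are assumptions, not measurements.
ONBOARD_ONLY = {"emergency_braking", "lane_keeping"}  # safety-critical

def place_workload(task, deadline_ms, network_rtt_ms, cloud_compute_ms):
    """Return where a task should run, given its latency budget."""
    if task in ONBOARD_ONLY:
        return "onboard"  # never gamble safety on network conditions
    if network_rtt_ms + cloud_compute_ms < deadline_ms:
        return "cloud"
    return "onboard"

# With a ~10 ms 5G round trip, a heavy map update can move to the cloud:
print(place_workload("hd_map_update", deadline_ms=100,
                     network_rtt_ms=10, cloud_compute_ms=40))  # cloud
```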

This is what it looks like when 5G gets the green flag.

Swarm Mobility – Connected fleets and car-sharing

So, what are we riding in? Is the car of the future just a sleek, self-driving update of my current car, or is it something completely different? People used to think of cars as “horseless carriages”. The way we think of cars today may be similarly short-sighted.

As the way people travel continues to evolve, automakers are rethinking their products and their relationships with customers. What this could mean for public transportation, car-sharing and the transport industry is very interesting.

Perhaps soon, transportation systems will work more like the IoT swarm robotics that work inside smart factories and distribution centres today. In this model, when a task is assigned, the closest available robot takes the job, or teams up with others to get the job done as efficiently as possible.

If we apply this thinking to vehicles, a “swarm mobility” model could lead to easier travel options and better use of resources. That would mean more uptime per vehicle and fewer vehicles on the road, but more ways to connect with passengers.
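
A toy version of that dispatch rule is easy to sketch: the closest available vehicle takes the job, exactly as in the factory-floor robot swarms. The fleet data and coordinates below are made up:

```python
# Toy "swarm" dispatch: the closest available vehicle takes the job,
# then marks itself busy. Fleet data and coordinates are made up.
import math

fleet = {
    "car_1": {"pos": (0.0, 0.0), "available": True},
    "car_2": {"pos": (2.0, 1.0), "available": True},
    "car_3": {"pos": (0.5, 0.5), "available": False},  # already on a job
}

def assign_nearest(pickup):
    """Assign the nearest available vehicle, or None if all are busy."""
    candidates = [(math.dist(info["pos"], pickup), name)
                  for name, info in fleet.items() if info["available"]]
    if not candidates:
        return None
    _, chosen = min(candidates)
    fleet[chosen]["available"] = False
    return chosen

print(assign_nearest((0.4, 0.4)))  # car_1 is the nearest available car
```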

With automotive IoT platforms and future car technologies like the digital car key or connected fleets, there are continual developments in ways to capture the potential of the connected car. While the road ahead is uncertain, the car of the future holds great potential.

6 questions that stop your invoice getting paid

6 questions that stop your invoice getting paid

Your invoice is more than just a document you must retain for your records or that asks clients to cough up. It's there to help you clear the final hurdle of a project: getting what you're owed.


A poorly designed invoice can be held over you as a reason not to pay, so you need to anticipate any last-minute questions that could act as a barrier to getting paid. These six fundamental questions should be answered to create a great invoice:

  1. Who is this payment demand from?

It’s incredibly important that the recipient of your invoice knows who it’s from. They get dozens of invoices a week and shouldn’t have to spend a great deal of time figuring out that the most recent one is from you. Adding contact details is the bare minimum; you should take time to build an invoice template that reflects not only your business but your brand.

  2. What work could this be for?

You shouldn’t assume that your client knows what you’re billing them for; they may have a handful of other projects running at the same time. Even for a straightforward project, it’s a great idea to detail exactly what you’re asking to be paid for and remove any doubt. It’s handy to include purchase order numbers, project reference numbers or the project’s name on the invoice.

  3. Did it really cost that much?

Quoting for a project can bring up a lot of scary figures for the client, so this is a great opportunity to remind them how much work you put in and exactly what you delivered. Be as descriptive with each line item as you can.

Include a “notes” section on your invoice; this space can help you remind the client about positive news from the work you’ve done.

  4. What about that issue?

As mentioned earlier, a notes section can help you highlight positive news, but it can also address unresolved issues that might keep your client from paying you straight away. A lot of projects end with a snagging list. Head off any questions that could turn into a seemingly never-ending back and forth with a note like “At Monday’s meeting we’ll agree the list of final changes.” This reminds the client that you’re on top of any future changes, but that they need to pay you for the work you’ve already completed.

Another way to resolve issues like this is to include your phone number, so the client can contact you and settle these problems more quickly.

  5. How long have we got before payment is due?

It isn’t pleasant having to chase payments, so specify a due date with clear payment terms on your invoice to avoid any awkward questions. If the client hasn’t paid by the due date, it’s then much easier to send overdue payment notices.

The most efficient way to get paid within a week of issuing your invoice is to include zero-day terms, asking the client to pay immediately. If you don’t set a payment date, the legal default of 30 days applies, and after that you’re entitled to charge interest. Ideally, for both parties, you should set out your payment terms and the interest you plan to charge in your initial contract as well as on your invoices.
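
For UK invoices, the statutory interest on late commercial payments is 8 percent plus the Bank of England base rate. Here is a quick sketch of the calculation; the base rate used below is an assumption, so check the current figure before billing:

```python
# Sketch of UK statutory late payment interest: 8% plus the Bank of
# England base rate, accrued daily from the due date. The base rate
# below is an assumption; check the current figure before billing.
from datetime import date

def late_payment_interest(amount, due, paid, base_rate=0.05):
    """Simple daily interest owed between the due date and payment."""
    days_late = (paid - due).days
    if days_late <= 0:
        return 0.0
    annual_rate = base_rate + 0.08  # statutory 8% on top of base rate
    return round(amount * annual_rate * days_late / 365, 2)

# A £2,000 invoice paid 30 days late, assuming a 5% base rate:
print(late_payment_interest(2000, date(2024, 1, 1), date(2024, 1, 31)))  # 21.37
```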

  6. How do I pay?

If your client is ready to pay, you want to make the process as painless as possible. Make sure your invoice has your BACS payment details so they can send you money directly. You can also include immediate payment links to financial platforms like PayPal.

If you are looking to take the next steps feel free to contact Hanson Regan for advice at Info@hansonregan.com

Is IT contracting for you?

Is IT contracting for you?

Have you been thinking about becoming an IT contractor? It has a lot of great perks but can be a little risky. So, let’s take a look and see if it’s the right decision for you.

What’s in it for your client?

There are several reasons why companies like using IT contractors.

  • They’re more flexible with hours than permanent staff.
  • They’re easier to hire and fire, as they’re more of a short-term commitment.
  • They provide skills that in-house teams might not.
  • But mainly, they save money. Without the cost of sick pay, holiday pay, redundancy and National Insurance, companies save money even if they end up paying you more.

What’s in it for you?

Everyone has their own reasons, but the following are the most common:

  • Being your own boss is extremely enjoyable and satisfying.
  • More money – Contractors are usually paid more than the employees they work alongside.
  • Freedom – Contractors can pick and choose when and where they work.
  • Variety – Each new contract brings a new company, and the varied skillset you build makes for a very impressive CV.
  • Lower taxes – Contractors who take professional advice can greatly reduce the amount of tax they pay.

The disadvantages

Nothing’s perfect, and if it were, everyone would be doing it.

  • Some skills are unsuitable – It could be that an employer needs a stable workforce, where customers expect to deal with the same employee every time.
  • Less security – You won’t be protected in the same way your permanently employed counterparts are.
  • Uncertainty – There are no guarantees; when one contract ends, there won’t always be another waiting for you.
  • Effort – Running your own business means a lot of paperwork, rules to follow and accounts to keep.
  • Loneliness – Working on your own can be isolating, and on top of that, no one pays you if you’re off sick or in need of a holiday.

Who makes a great contractor?

The qualities a contractor needs are different from those of a permanent employee:

  • Ability to adapt – With different sites, conditions, tools and cultures, you’ll need to be great at adapting to everything each new contract brings. Those who can’t will struggle, especially on their first contract.
  • Ability to build relationships – A great contract