Automating Visual Inspections in Energy and Manufacturing with AI (Cloud Next ’19)

[MUSIC PLAYING] MANDEEP WARAICH: My
name is Mandeep Waraich, and I lead the Industrial AI
Initiative for Google Cloud. Thank you so much for joining. Really delighted
to have you here. At Google, we believe that
the goal of every technology should be to enrich
our lives, to take our societies, our
collective humanity, forward. And do so in a
responsible manner. So we are constantly thinking
of ways in which technology, and particularly AI,
can help us realize this bright and
promising future. So we’ve been thinking, how can
we apply our advanced computer vision technology
for solving some of the very hard,
incumbent problems in the industrial sectors. And how can we make these
sectors more efficient and more sustainable? So in the next 50 minutes, we’ll be talking about how industrial inspection AI, powered by AutoML Vision technology, can help make industrial inspections easier, faster, more accurate, and, more importantly, safer. And we’ll also look at how
two leading companies are applying this technology to
the energy and manufacturing sector. So let’s get started. AI holds great
promise for solving some real-world problems. From detecting glaucoma
with retinal images, to processing millions or
even billions of documents to understand their content,
to automatically moderating unsafe and
inappropriate content, we are applying this technology
across all of these use cases. But we also recognize
that developing this technology, building
these custom vision models, is laborious. And it’s hard. So we wanted to enable
even the non-programmers to be able to tap
into the power of AI. And that is precisely why
we created AutoML Vision. While our standard APIs
are a great powerhouse for pre-trained models on the
massive Google image datasets, AutoML allows you to
train custom models that are specific to your industry
needs, to your use case needs. How do we do that? In a very simple, clean UI, you are able to upload labeled images, if you’re working on a classification problem. Or you can draw bounding boxes, as we’ll see, to detect specific objects within those images. Once you’ve done that, with a click of a button, you’ve got a trained model. And that model can be used to detect shark species, in this case, or to detect defects, anomalies, and breakage in your specific industrial products.
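To make that workflow concrete, here is a minimal sketch of requesting predictions from a trained AutoML Vision model with the Python client (google-cloud-automl). The project ID, model ID, and file name are illustrative placeholders, not a real deployment.

```python
# Hedged sketch: classify one image with a custom AutoML Vision model.
# "my-project" and the model ID below are illustrative placeholders.
from google.cloud import automl

client = automl.PredictionServiceClient()
model_name = automl.AutoMlClient.model_path(
    "my-project", "us-central1", "ICN0000000000000000000")

with open("turbine_blade.jpg", "rb") as f:
    payload = automl.ExamplePayload(image=automl.Image(image_bytes=f.read()))

# Only return labels the model scores above 0.5.
response = client.predict(request={
    "name": model_name,
    "payload": payload,
    "params": {"score_threshold": "0.5"},
})
for result in response.payload:
    print(result.display_name, f"{result.classification.score:.2f}")
```

We’re already seeing use cases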
with wind turbine degradation inspection, with outages
on solar panel farms, or failures on electric poles. And we’ll be looking into
some of these examples in more detail shortly. At this point, I
want to take a moment to talk about data
protection and privacy. So your datasets, your
images, are your images. All of these custom
trained models are used only on your
use cases by you. Google does not
pull these images into any common repositories
or use this across customers. So your datasets, your images. We’ll take a look at
how this technology can be applied for aerial
inspection in wind turbines. And then an application of
that on the production line in a manufacturing company. But before I begin there, we
want to share Google’s stance on the use of this technology. Google cares deeply
that its technology is used for creating a
positive impact in the world. And in that vein, Google
created its AI Principles last June. They set the standard for the application of these AI technologies, and we abide by these principles for any work that involves AI. And similarly, for the use of this technology and for this product, we expect that this technology be applied in accordance with the AI Principles, which explicitly prohibit the use of this technology for any nefarious purposes. So we’ll now take a look at
how one of the leading energy companies in the world is
applying this technology to create a brighter and
greener future for us all. Let’s take a look at AES
Wind Turbine Inspection. [VIDEO PLAYBACK] – If not the biggest
challenge of our times. Now, we really do
have the technology to address the issue of carbon footprint and greenhouse gases from the electric sector. The AES Corporation
is one of the leaders in new technologies for
renewables and energy storage. It’s a Fortune 500 company. Our mission is accelerating
a safer and greener energy future. – Right now, we have
eight wind farms. Each farm has
different capacity, starting from 50 turbines,
up to 300 turbines. – They cover large spans
of geography and land. They’re spread across
hilltops and mountainsides. – All these turbines
needs annual inspections. Originally, it could take up to
two weeks to do one inspection. We partnered with leading
drone service company Measure. Right now, with drones,
we can do it in two days. And this is safe and quick. – For a wind turbine inspection,
we go out with our pilots. And what we’re looking for
is cracks or defects, things that may need to be repaired. – On a typical inspection, we’re
coming back with 30,000 images. Spending four weeks
reviewing images– I don’t think anyone’s
going to argue that that’s the best use of a
highly trained engineer’s time. – How do we speed
that up and how do you make it 10x more efficient? That’s where machine learning and AI come in. – We’ve built a great
end-to-end solution using Google Cloud’s tools and platform. With the AutoML
Vision tool, we’ve trained it to detect damage. We’re able to eliminate
approximately half of the images from
needing human review. The remaining 50% of
their time can now be very focused on
identifying that damage and really determining
the right course of action to remediate it. Moving from reviewing images
to training machine learning models– it’s a much higher
order employment opportunity for people, and one where we’re
trying to develop our team. Google Cloud has
been a great partner. Their technology is consistently
among the world leaders. And just a great partner to
work with, person to person. At the end of the day, we won’t
reach the cleaner energy future without advanced tools,
like machine learning. – Technology will
allow renewable energy to be cheaper than
conventional energy. Artificial intelligence,
robotics– this is really where the
future is all about. [END PLAYBACK] MANDEEP WARAICH: Please
join me in welcoming Nick Osborne from AES. NICHOLAS OSBORNE:
Thank you, Mandeep. And thank you to the team that
put that great video together. The power industry is enormous. It touches all of our
lives, and the impacts are felt around the world. The industry
investments are often quoted in the
trillions of dollars. The opportunities
for improvement are often in the billions,
if not tens or even hundreds of billions of dollars. The industry is also going
through significant and profound change. Renewable energy is continuing
to fall dramatically in price. Solar, wind, and
battery energy storage are not just possible,
but practical. The consumer is
also driving change. They are much more aware
of both the opportunities and the costs associated
with their energy use. And the third megatrend is the new digital tools– Cloud, AI, and many others– that are changing the economics of insight. I’m here today to share one
story, where we’ve partnered with Google to improve
lives by accelerating a safer, cleaner energy future. We call this, our vision, our
aerial intelligence platform. First, a little bit about myself
and the company I work for. I’m Nick Osborne. I’m the business leader focused
on understanding and applying advanced analytic tools,
like artificial intelligence and machine learning, to
applied business cases. My job’s really quite simple. I accelerate, coordinate,
and facilitate the adoption of these new
tools across the organization. AES is a global power company. We’re headquartered
in the United States, but operate in 15
countries around the world. We’ve made a very
significant commitment to reduce our carbon intensity
by 70% by the year 2030. To help us achieve
this, we’ve made some very significant
investments in new technologies. We’re the world leader
in battery energy storage using lithium ion batteries. And we’re also the largest owner
of solar assets in the United States. On a personal note, it
feels good to come home at the end of the
day and know I’m working with a company that’s
putting its money where its mouth is, to
drive that change that is core to our mission. Applying new
technologies is core to how we operate our business. Our drone program is
considered world leading in the energy industry. We developed this program
by partnering with Measure. Measure is a professional drone services organization, and its Measure Ground Control software is an enterprise-caliber drone operations platform. Through this partnership, we’ve
improved the cost, safety, and performance of
our inspections. Another consideration
is that, while we often hear about the
threat of technology taking jobs or eliminating jobs,
that’s clearly not the case with what we’re seeing
in our drone program and many other technologies
that we’re exploring. We now have over
170 pilots trained in our organization, performing
operations in over 100 locations around the world. These are employees
with tremendous value for our company, for their
personal advancement, and their broader career growth. Prior to drones,
these inspections were typically done manually. So it was either someone
climbing up the turbine and then rappelling down
to inspect the blade, or hiking around the turbine
with a large telephoto lens, trying to capture an
angle and trying to see if they could detect damage. Neither of these were as
effective, or as efficient, or as safe, as what we’re
able to do with drones. So using drones, we’re now able
to take that partial inspection that was taking two weeks of
time and do a full inspection in two days. Much lower cost,
much higher quality, in a much safer manner. Tremendous improvement in
efficiency and velocity in our organization. But there was one new workflow. Now, when we do a single
turbine inspection– a single turbine has
around 300 images. When we do an entire
field, this means we’re coming back with
30,000 or even 60,000 images. This takes a lot of
meticulous and detailed review to complete the inspection work. So we saw this as
a great opportunity for artificial intelligence. And this is really where
our partnership with Google started to grow. To understand our
journey towards AI, you need to understand
where we started. We started with an
investment in talent. We sent two classes
of six people to Google’s Advanced
Solutions Lab for intensive training in
supervised machine learning. This cohort became the
foundation for our work in AI. Internally, we referred to this decision as a no-regrets decision, meaning that we were able to
quickly move forward, make this investment, with little
or no hesitation on our part. A few keys for ROI: first, don’t just send IT people to this training. A lot of the value from
data science in general and this program comes from the
mixture of expertise and ideas that you get when you send
multiple types of people to the program. The second piece of advice– and this is maybe a bit
selfish on my part– is make sure you have
a good commitment to work on your projects
after this training. We only sent high performing
individuals to the training. And the risk with sending
high performing individuals is that they’re
going to get quickly pulled back into their day job. And that’s definitely something
we had to work through as an organization. So this investment
set the groundwork to accelerate our progress that
we were making as a company, and is another example of where
new technologies are increasing opportunities for our employees. From this foundation,
we got to work. We went through a proof
pilot production process, with each step being a stage
gate for further investment. So starting with our proof, we built a custom TensorFlow model, leveraging the openly available Inception v3 vision model. And it worked.
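As a rough illustration of that kind of proof of concept, here is a minimal transfer-learning sketch that fine-tunes a pretrained Inception v3 on labeled blade images. The directory layout, class count, and hyperparameters are assumptions for illustration, not AES’s actual model.

```python
# Hedged sketch: transfer learning from Inception v3 for damage detection.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # freeze the pretrained features at first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # damage / no damage
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Assumed layout: blade_images/damage/*.jpg and blade_images/ok/*.jpg
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "blade_images", image_size=(299, 299), batch_size=32)
model.fit(train_ds.map(lambda x, y: (x / 255.0, y)), epochs=5)
```

We were able to detect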
damage, but it also showed us where our shortcomings were. Our data needed work. And setting up the
end-to-end platform was going to be
difficult. And we were going to need some help. So in speaking with Google about
our progress and our learnings, we discussed the possibility
of partnering on a pilot phase. So in the pilot phase, we
were using Google’s data labeling service and
Google’s AutoML Vision tool to really accelerate our efforts
and boost our efficiency. And again, it worked. False negatives were seen as a key business risk for the organization. Not detecting damage is something that we weren’t willing to accept in our inspection process. So using our most restrictive precision-recall thresholds during this pilot phase, we were able to show that we could eliminate 30% of the images from needing any human review.
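The idea behind those restrictive metrics can be shown with a small sketch: choose the highest score threshold that still keeps recall on damage at 1.0 on a validation set, then count how many images fall safely below it. The arrays here are toy stand-ins, not AES data.

```python
# Hedged sketch: recall-constrained threshold selection.
import numpy as np
from sklearn.metrics import precision_recall_curve

y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])    # 1 = damaged blade
y_score = np.array([0.10, 0.20, 0.90, 0.05, 0.70,
                    0.30, 0.15, 0.80, 0.25, 0.10])    # model damage scores

precision, recall, thresholds = precision_recall_curve(y_true, y_score)
# Highest threshold that still catches every damaged image (recall == 1.0).
safe = thresholds[recall[:-1] == 1.0]
cutoff = safe.max() if safe.size else 0.0
auto_cleared = (y_score < cutoff).mean()
print(f"threshold={cutoff:.2f}, images cleared without review: {auto_cleared:.0%}")
```

So that four-week review process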
was now down to three weeks, really accelerating our velocity and our time to action. Time to action has really
become one of those key metrics that we look at
with this project. So this gave us the
commitment and ability to move forward with our
production environment. So our production environment
is a scalable platform for us to label images,
train new models, and manage those models in production. We’re still iterating on and refining this model, but we’re, again, seeing some very promising results. We’re now showing that we can
eliminate 50% of the images from needing any human review. And the remaining
50% of the images are now categorized and
classified by type of damage, further improving
our time to action and focusing our engineers on
the most important and most critical types of damage. Going back to data,
one of the things we learned early on was that a lot of our data was not at the quality or level of consistency we needed for machine learning. So working with Measure, we developed a nine-category classification of damage. This includes things like
cracks, gel coat damage, different types of
de-lamination and splitting, as well as some
non-damage categories, like serial numbers, lightning
protection points, stickers, and whatnot. So we also worked with Google’s
data labeling team to iterate and walk through many, many
edge cases of different types of damage that are out there. We started with a
series of batches, small in size, doing a full
and complete review of all the labels
that were coming back. But as the quality of labeling improved and our batch sizes grew, we moved towards a sampling basis. We also needed to
develop a platform to manage the labeling effort,
model training, prediction process. Working with Google, we
identified Clear Object to be our local GCP partner
to help us architect and develop our platform using
the latest thinking in cloud and serverless tools
available from Google. Clear Object has
been a great partner and worked to quickly
develop this platform for us. The platform leverages AutoML
for our core modeling engine, Cloud Storage and Cloud SQL
for our image repository and metadata, as well as
Cloud Functions and App Engine to manage our interactions
and orchestrations.
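As a sketch of how those pieces might connect, here is a minimal background Cloud Function that fires when a new inspection image lands in a Cloud Storage bucket and sends it to an AutoML model for scoring. The project, model ID, and threshold are illustrative assumptions, not the actual AES platform.

```python
# Hedged sketch: GCS-triggered Cloud Function calling AutoML Vision.
from google.cloud import automl, storage

PROJECT_ID = "my-project"            # placeholder
MODEL_ID = "ICN0000000000000000000"  # placeholder

prediction_client = automl.PredictionServiceClient()
model_name = automl.AutoMlClient.model_path(PROJECT_ID, "us-central1", MODEL_ID)

def on_image_upload(event, context):
    """Runs on each finalized object; scores the image and logs labels."""
    blob = storage.Client().bucket(event["bucket"]).blob(event["name"])
    payload = automl.ExamplePayload(
        image=automl.Image(image_bytes=blob.download_as_bytes()))
    response = prediction_client.predict(request={
        "name": model_name,
        "payload": payload,
        "params": {"score_threshold": "0.5"},
    })
    for result in response.payload:
        print(f"{event['name']}: {result.display_name} "
              f"({result.classification.score:.2f})")
```

Now that we have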
this platform, we’re continuing to
improve on the model. But we’re also looking
to expand its use. We’re looking at
new business cases– solar, transmission
infrastructure, and even safety– as well as looking at new
inspection modalities. For example, infrared
and even lidar. We’re also looking at pushing
the model to the edge, or in this case, the drone. So I’m really excited
to hear about what LG is going to be sharing next. Energy is a trillion
dollar business. It impacts lives every day, in
every country around the world. The challenge and the real
world impact are huge. If you’re interested in
working with or for a company that is improving
lives by accelerating a safer, cleaner energy
future, please come talk to me. Mandeep. [APPLAUSE] MANDEEP WARAICH: Thank
you very much, Nick, for that great presentation. So we saw how AutoML Vision can
be used for visual inspections to make them easier, faster, more accurate, and safer. In speaking with a lot of experts from the industry, we learned that there are some specific requirements for manufacturing use cases. A lot of the time, this
data sits on premise. There are latency requirements. And most of the images and datasets need to be processed on edge devices, be it a mobile phone, an Edge TPU, a CPU, or a GPU. With our AutoML Vision Edge solution, you’re able to take your custom trained models, download them to an edge device, and run those inferences from your edge devices.
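Since AutoML Vision Edge can export a TensorFlow Lite file, a minimal on-device inference sketch might look like the following; the model path, image, and label list are illustrative assumptions.

```python
# Hedged sketch: running an exported Edge model with TensorFlow Lite.
import numpy as np
import tensorflow as tf
from PIL import Image

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Resize the inspection image to the tensor shape the model expects.
_, height, width, _ = input_details["shape"]
image = Image.open("part_0001.jpg").convert("RGB").resize((width, height))
interpreter.set_tensor(
    input_details["index"],
    np.expand_dims(np.array(image, dtype=input_details["dtype"]), axis=0))
interpreter.invoke()

scores = interpreter.get_tensor(output_details["index"])[0]
labels = ["ok", "scratch", "dent"]  # hypothetical label map
print(labels[int(np.argmax(scores))])
```

I think you’d much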
rather see that in action and hear directly from a
manufacturing company which has deployed these models
on the production line. So it’s a great pleasure
for me to invite Mr. Sungwook Lee from
LG to share more about this initiative. Mr. Lee. [APPLAUSE] SUNGWOOK LEE: Good
afternoon, everyone. Thank you for your attention
to our previous presentation. My name is Sungwook Lee,
and I’m vice president of AI and big data business
unit at LG CNS. I sense there is a lot of passion in our audience today. If you are like me, I expect that we share many great hopes of applying AI to real-world solutions. I also hope this short review
of our collaboration with Google AutoML will help you
all in your AI work. Today, we’ll be looking at
how LG CNS and Google have successfully collaborated on AI image recognition technologies, and how we have been
applying our [INAUDIBLE] to vision inspection systems
and several manufacturing solutions. Let’s begin with a little
background of LG CNS. I think you may know the name of the LG Group, but you may not know what kinds of companies are in the LG Group. So I want to introduce some of them. We have LG Electronics, which produces televisions and refrigerators. And we have LG Display, which produces world-leading OLED panels. And LG Innotek produces camera modules, so I think half of you already have an LG Innotek camera in your cell phone. Oh, sorry, smartphone. And LG Chem produces electric batteries. So now they are the world-leading company in that [INAUDIBLE] group. So you may know that
almost all the LG group companies are working in
the manufacturing industry. LG CNS supplies IT solutions for the LG Group affiliates and other companies working in the manufacturing industry. We are constantly working on
how to best apply AI technology to improve the
manufacturing processes. And we all know that it can
be really challenging to use big data and AI technology to ensure product quality in larger-scale production. This is where our discussion of
Google AutoML comes in today. LG CNS started working with Google AutoML in the summer of last year. We started our collaboration after seeing what Google was achieving in their image recognition technology, because we thought Google AutoML could help to improve vision
inspection for LG production processes. And to our great satisfaction,
our collaboration has been a success. OK, before we worked with
Google AutoML, actually we had already developed
our own in-house AI system. For those of you familiar with the manufacturing process, you will likely recognize that the picture on the left of the screen is the typical visual inspection system that relies on human operators. While many production lines can use a camera, IoT sensors, and other detection technologies, it is still hard to find small defects efficiently. Non-defective products are often misjudged as defective because of minor factors like small dust particles or low-resolution images, so it is often still more effective to rely on people to complete visual inspections. But while people get better results, the monotony of visual inspection also leads to many errors. To solve this problem,
LG CNS made a transition. We moved from the traditional
visual inspection– the left image
here you can see– to the AI inspection
system shown on the right. I’m sure many of you are also
working on the inspection technology, so you will
be familiar with the trial and error method we did
to improve our system with artificial intelligence. Anyway, with our
in-house system, we increased the
accuracy and performance, and even improved our
process speed and efficiency. This means we could also reduce our operating costs. With our in-house AI system, we were able to apply it to over 30 production lines in the LG Group alone. Some of these include improved defect detection in LCD and OLED panels, which you can see in the first picture. In the middle picture, we could remove impurities from the optical film. And we even improved quality control for production, so automotive fabrics can be made with our in-house AI system. But even with these improvements, our system wasn’t working optimally,
because it still required a lot of time and
effort to perform well. And now I will talk a
little about the downside of this system. As is often the
case with success, we also ran into some obstacles. As we expanded the application
of our AI vision inspection into other areas, we have
experienced a shortage of skilled AI developers. It is very hard to hire good AI developers for our company, which is located in South Korea, so these are very hard times. When one AI developer leaves our company, the negative impact on us is very big. And when we designed AI models, our developers needed to spend lots of time and effort to achieve high performance. Additionally, as we developed
the models using servers located at the production site, the complexity of the architecture increased, and that was hard to solve. So now we require a process to centrally design and test the model [INAUDIBLE], and to centrally control the performance of the deployed models in one integrated system. Collaboration with
Google has been critical to finding the
solution to these problems. The performance of Google
AutoML has been truly exciting, even though our AI
experts don’t like it. One of the key areas we needed
to improve in our AI system was our productivity in terms
of the model development time. As you can see in the
diagram on the left, the top bar shows that it took roughly seven days to complete our model before using AutoML. But afterwards, we brought
that down to a mere two hours with Google AutoML. The other area we
needed to improve was the accuracy of our system. In addition to being faster– as the diagram on the right shows– Google AutoML’s performance exceeded that of our AI experts many times. Our test results showed an average 6% improvement in performance when using Google AutoML. While we have made advances
using Google AutoML and integrating that with
our visual inspection, we are still facing
several challenges. In many cases, we could not meet our clients’ requirements. And we found that many of these issues come from low image quality, not from the model that we made with Google. So to solve this
problem, we recently launched an image pre-processing research team. The members of this team spend more time on exploratory data analysis and pre-processing data, and they try hard to augment data to get better machine learning models. They have also come to spend a lot of time thinking about how to change the inspection process itself. I believe our members can now use their time and effort for more strategic work.
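As a rough illustration of the kind of pre-processing and augmentation such a team might apply before training, here is a minimal sketch; the transforms, sizes, and parameters are illustrative assumptions, not LG CNS’s actual pipeline.

```python
# Hedged sketch: cleaning and augmenting inspection images before training.
import tensorflow as tf

def preprocess(image_bytes):
    """Decode a raw image, normalize it, and resize to a fixed shape."""
    image = tf.io.decode_jpeg(image_bytes, channels=3)
    image = tf.image.convert_image_dtype(image, tf.float32)  # scale to [0, 1]
    return tf.image.resize(image, [224, 224])

def augment(image):
    """Randomized variants so the model tolerates lighting and framing noise."""
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_brightness(image, max_delta=0.1)
    image = tf.image.random_contrast(image, lower=0.9, upper=1.1)
    return tf.clip_by_value(image, 0.0, 1.0)
```

Now we are planning to expand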
our business into consulting services, so we will
provide expertise to enhance the overall
inspection processes as a one-stop solution. We are hopeful that we will see the first manufacturing visual inspection area where humans and AI share the responsibility very optimally. Do you agree? OK. I would like to announce that we have built an integrated AI vision inspection architecture, so our system and Google AutoML are connected seamlessly. With this architecture
we will be able to maximize humans’
capability and utilization of Google AutoML. This architecture starts from the data science part at the bottom. That team ensures image quality, producing clean images and sending them to Google AutoML. Google AutoML takes the clean images and produces an AI model efficiently and effectively. We completely manage the models, with all the history data, performance status, and automated learning processes.
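To make that bookkeeping concrete, here is a hedged sketch of the kind of model registry such a system implies, recording each trained model version with its evaluation metrics so retraining and rollout can be automated. The schema and values are illustrative assumptions, not LG CNS’s actual design.

```python
# Hedged sketch: a tiny model registry (sqlite3 as a stand-in for Cloud SQL).
import sqlite3
from datetime import datetime, timezone

db = sqlite3.connect("model_registry.db")
db.execute("""CREATE TABLE IF NOT EXISTS models (
    line TEXT, version INTEGER, automl_model_id TEXT,
    precision REAL, recall REAL, trained_at TEXT)""")

def register(line, version, model_id, precision, recall):
    """Record a newly trained model and its evaluation results."""
    db.execute("INSERT INTO models VALUES (?, ?, ?, ?, ?, ?)",
               (line, version, model_id, precision, recall,
                datetime.now(timezone.utc).isoformat()))
    db.commit()

# Hypothetical production line and metrics.
register("oled_panel_line_3", 7, "ICN-placeholder", 0.991, 0.986)
```

With this architecture,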
the LG Group now can develop and manage
thousands of AI models simultaneously. In addition to
vision inspection, our goal is to expand
the architecture to the other
manufacturing use cases, to manage the whole factory– equipment, facilities, safety, and so on. I think you can imagine many use cases where we can expand this integrated architecture in the manufacturing industry. To this point, we
have gone over how collaboration with Google
AutoML has improved our visual inspection systems. Now let’s look to the future. Based on our AI integration
success within the LG Group, we will keep working to be positioned as a leading AI visual inspection total service provider. So we will cover everything from the pre-processing area to training the model, and then we will manage it all [INAUDIBLE] with Google AutoML. Whether the cause of
poor inspection quality is machine-learning equipment,
or the image quality, or data labeling, or the
operators themselves, working with Google
AutoML, we will strive to achieve our goal of
99.9% accuracy and a leak rate of 0.001% under all conditions. If you are experiencing similar issues in your industry, I hope that this session has been helpful. I really appreciate
your attention, and thank you for listening. Thank you. [APPLAUSE] MANDEEP WARAICH:
Thank you, Mr. Lee. Thank you, Mr. Lee. So the goal that Mr.
Lee shared about LG is very much what we share for
our product and for our roadmap as well, which is to make
our inferences faster, our interfaces more
intuitive and easier, and our results more accurate. Within manufacturing, we are
seeing many more use cases beyond automotive, beyond
electronics, into the food, into retail, and
many more categories. And we are very excited
to work on these new use cases with you. We saw how AI and
visual inspection can be applied to the
manufacturing use cases, and we looked at how
this can be applied for the aerial
inspection use cases. Beyond the three use
cases that we talked about on the aerial inspection side,
we are also exploring more work on agriculture monitoring
and construction site monitoring. As of today, this technology
is available to use in beta. Please visit
cloud.google.com/vision to register your interest. You can use the
technology right away, but by registering at this site,
we are able to partner with you and work with you on
our upcoming releases and our early access program. So we look forward
to hearing from you. Thank you so much for joining
us in this shared vision, and we really look forward to working with you in creating a brighter, greener, and more positive future. Thank you very much, all. [MUSIC PLAYING]
