Have you come across the most popular deep learning library in the world, Google's TensorFlow? If not, it's an ideal opportunity to look at existing models for object detection, face detection, and face recognition. I found that some of these models do not shine with optimal performance in the browser, while others perform entirely well. Either way, you may find this pretty astonishing once you consider the potential of in-browser machine learning and all the possibilities a library such as TensorFlow.js opens up.
For the time being, let's get to know TensorFlow.js.
TensorFlow.js is an open-source library you can use to define, train, and run machine learning models entirely in the browser, using JavaScript and a high-level layers API. If you are a JavaScript developer who's new to ML, TensorFlow.js is a great way to start learning.
Aside from this, everybody seems to be talking about AI and ML these days. And I must say, as a front-end developer, I have always been amazed by the things people are building with them. I'm talking about Siri, Alexa, Tesla, and even Netflix recommending your next TV show using AI-based concepts. So if they can, why can't you?
Before we go any further, how does it work? It's all about science! Our brains are full of neurons: they receive input from our various senses and produce output through the axon. Think this through and convert it into a mathematical model, and you have an artificial neuron. TensorFlow is one of the most successful tools for building such neural networks.
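To make that idea concrete, here is a minimal sketch of a single artificial neuron (the names `sigmoid` and `neuron` are my own choices for illustration, not part of TensorFlow.js): inputs are multiplied by weights, summed together with a bias, and squashed through an activation function.

```typescript
// Sigmoid activation: squashes any real number into the range (0, 1).
function sigmoid(x: number): number {
  return 1 / (1 + Math.exp(-x));
}

// A single artificial neuron: weighted sum of inputs plus a bias,
// passed through the sigmoid activation.
function neuron(inputs: number[], weights: number[], bias: number): number {
  const sum = inputs.reduce((acc, input, i) => acc + input * weights[i], bias);
  return sigmoid(sum);
}

// Example: two inputs, two weights, zero bias.
const output = neuron([1, 0], [0.5, -0.5], 0); // sigmoid(0.5), roughly 0.62
```

A neural network is, at heart, many of these units wired together in layers, with the weights and biases adjusted during training.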
With WebGL, TensorFlow.js can run our computations on the graphics card, which is much faster for this kind of workload than running on the CPU alone.
Image classifiers
Image classifiers enable a computer system to understand and group pictures the way people do. Take Facebook, for instance: have you ever considered how the platform automatically tags your photo? How do they know? We will be creating two examples here:
- Upload an image, and let the browser classify what's in it.
- Wire up our webcam, and let the browser classify what it sees in the video stream.
Image upload classifier
Step 1- Install TensorFlow & Mobilenet
```shell
npm i @tensorflow/tfjs
npm i @tensorflow-models/mobilenet
```
Step 2- Import MobileNet in your component
```typescript
import * as mobilenet from '@tensorflow-models/mobilenet';
```
Step 3- Load the MobileNet model
Let's load the model in ngOnInit and also add a loading indicator:
```typescript
@Component({
  selector: 'app-image-classfier-upload',
  templateUrl: './image-classfier-upload.component.html',
  styleUrls: ['./image-classfier-upload.component.scss']
})
export class ImageClassfierUploadComponent implements OnInit {

  model: any;
  loading: boolean;

  async ngOnInit() {
    this.loading = true;
    this.model = await mobilenet.load();
    this.loading = false;
  }
}
```
Step 4- Prepare the HTML
Time to add our file input, along with the loading indicator, to the template:
```html
<div class="cont d-flex justify-content-center align-items-center flex-column">
  <div class="custom-file">
    <input type="file" class="custom-file-input" (change)="fileChange($event)">
    <label class="custom-file-label">Select File</label>
  </div>
  <div *ngIf="loading">
    <img src="./assets/loading.gif">
  </div>
</div>
```
Step 5- Implement the fileChange() function
Add an imgSrc class property, set from the FileReader result, to show a preview of the uploaded image:
```typescript
imgSrc: string;

async fileChange(event) {
  const file = event.target.files[0];
  if (file) {
    const reader = new FileReader();
    reader.readAsDataURL(file);
    reader.onload = (res: any) => {
      this.imgSrc = res.target.result;
    };
  }
}
```
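Before handing the file to the FileReader, it can be worth checking that the user actually picked an image, since that is all MobileNet can classify. A small hypothetical helper (`isImageFile` is my own name, not an API) that checks the file's reported MIME type:

```typescript
// Returns true when the given MIME type describes an image,
// e.g. "image/png" or "image/jpeg". A File object exposes
// its MIME type via the `type` property.
function isImageFile(mimeType: string): boolean {
  return mimeType.indexOf('image/') === 0;
}
```

Inside fileChange you could then guard with `if (file && isImageFile(file.type)) { ... }` before reading the file.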
Step 6- Classify the uploaded image
Now that we have the imgSrc, we can run the model.classify() method and get the model's predictions:
```typescript
predictions: any[];

// Classify the preview <img> element once it has loaded the uploaded image
async classifyImage(img: HTMLImageElement) {
  this.predictions = await this.model.classify(img);
}
```
Time to update the template so it shows the predictions:
```html
<div class="list-group">
  <div class="list-group-item" *ngFor="let item of predictions">
    {{item.className}} - {{item.probability | percent}}
  </div>
</div>
```
As a result, the uploaded image is shown alongside the model's top predictions and their probabilities.
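For reference, model.classify() resolves to an array of objects with className and probability fields. If you wanted to format them yourself rather than rely on Angular's percent pipe, a helper along these lines would do (a sketch of my own, `formatPredictions` is not a library function):

```typescript
// Shape of a single MobileNet prediction.
interface Prediction {
  className: string;
  probability: number;
}

// Formats each prediction as "label - NN%", rounded to the nearest percent.
function formatPredictions(predictions: Prediction[]): string[] {
  return predictions.map(
    (p) => `${p.className} - ${Math.round(p.probability * 100)}%`
  );
}
```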
Webcam Classifier
MobileNet classification works not only on static images; we can also classify and predict on a live video stream!
Don't forget to add an HTML5 video tag to our template:
```html
<video autoplay muted width="300px" height="300px" #video></video>
```
Let’s get a hold of the video element with @ViewChild:
```typescript
@ViewChild('video') video: ElementRef;
```

Then implement the AfterViewInit life cycle:

```typescript
async ngAfterViewInit() {
  const vid = this.video.nativeElement;
  if (navigator.mediaDevices.getUserMedia) {
    navigator.mediaDevices.getUserMedia({ video: true })
      .then((stream) => {
        vid.srcObject = stream;
      })
      .catch((err) => {
        console.log('Something went wrong!', err);
      });
  }
}
```
A few things to notice:
- Since we are accessing DOM elements, it's better to use the AfterViewInit life cycle hook.
- Check that navigator.mediaDevices.getUserMedia exists before calling it, so unsupported browsers fail gracefully.
- getUserMedia returns a promise that resolves with the stream; we set the video element's srcObject to this stream, and with that, we can see ourselves.
Let’s run the classify() method every 3 seconds on the webcam stream:
```typescript
setInterval(async () => {
  this.predictions = await this.model.classify(this.video.nativeElement);
}, 3000);
```
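When classifying every 3 seconds, the displayed label can flicker on low-confidence frames. One simple option (a sketch of my own, not part of MobileNet's API) is to keep only the best prediction and ignore frames where it does not clear a confidence threshold:

```typescript
// Shape of a single MobileNet prediction.
interface Prediction {
  className: string;
  probability: number;
}

// Returns the most probable prediction if it clears the threshold,
// otherwise null (meaning "not confident enough to display").
function topPrediction(
  predictions: Prediction[],
  threshold = 0.3
): Prediction | null {
  if (predictions.length === 0) {
    return null;
  }
  const best = predictions.reduce((a, b) =>
    b.probability > a.probability ? b : a
  );
  return best.probability >= threshold ? best : null;
}
```

In the interval callback you could then update the UI only when `topPrediction(this.predictions)` returns a non-null value.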
Things to remember:
You don't have to be a data scientist to do AI anymore. TensorFlow.js is an independent package; you can run it in the browser with a single import. Will front-end developers take part in building AI-based prediction models in the future? I'd definitely put my money on that.