I like to experiment with Computer Vision and AI APIs (like Azure Cognitive Services, Google Cloud Vision and IBM Watson) to see if I can utilise them for some of my ideas.
The easiest way to test those scripts and APIs is to take a photo directly and send the image data to the API/script, instead of uploading files. I didn’t find a fast, mobile-first HTML5 camera template as a starting point for my prototypes, so I developed one myself. The interface setup is mainly inspired by the standard Android and iOS camera apps.
The template doesn’t do anything with the image (canvas) data yet; I’ll leave that up to you (a sketch of one way to handle it follows below).
Feel free to use it in your next Computer Vision or AI project.
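As a starting point for that last step, here is a minimal sketch (not part of the template) that posts the captured canvas data as a JPEG blob to a vision API. The endpoint URL, headers and response handling are placeholders you would adapt to whichever API you are prototyping against:

```javascript
// Minimal sketch: send the captured canvas data to a (hypothetical) vision API.
// 'https://example.com/vision-endpoint' is a placeholder, not a real service.
function sendCanvasToApi(canvas) {
  canvas.toBlob(function (blob) {
    fetch('https://example.com/vision-endpoint', {
      method: 'POST',
      headers: { 'Content-Type': 'application/octet-stream' },
      body: blob
    })
    .then(function (response) { return response.json(); })
    .then(function (result) { console.log('API result:', result); })
    .catch(function (error) { console.error('Upload failed:', error); });
  }, 'image/jpeg', 0.9);
}
```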
Requirements
WebRTC is only supported on secure connections, so you need to serve the template over https. You can test and debug from localhost in Chrome, although this doesn’t work in Safari.
Because it utilises WebRTC, you need a recent (mobile) OS and browser. It should work on Android with Chrome or Firefox, and on iOS 11 with Safari 11.
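A quick way to verify these requirements is a feature check in the browser console; the sketch below only uses standard browser APIs (the warning messages are just illustrative):

```javascript
// Check for a secure context and getUserMedia support before starting the camera.
if (!window.isSecureContext) {
  console.warn('Not a secure context: serve the page over https (or use localhost in Chrome).');
}
if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
  console.log('getUserMedia is available.');
} else {
  console.warn('WebRTC / getUserMedia is not supported in this browser.');
}
```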
Functionalities
– Fullscreen mode
– Take Photo
– Flip Camera (environment / user), see the sketch after this list
– Supports both portrait and landscape mode
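To illustrate how the environment/user flip can be done with standard getUserMedia constraints (a hedged sketch, not necessarily the template’s exact implementation; variable and element names are mine):

```javascript
var currentFacingMode = 'environment';   // start with the rear camera

function startCamera(facingMode) {
  var constraints = { audio: false, video: { facingMode: facingMode } };
  return navigator.mediaDevices.getUserMedia(constraints)
    .then(function (stream) {
      document.querySelector('video').srcObject = stream;
    })
    .catch(function (error) { console.error('getUserMedia error:', error); });
}

function flipCamera() {
  var video = document.querySelector('video');
  // stop the current tracks before requesting the other camera (important on iOS)
  if (video.srcObject) {
    video.srcObject.getTracks().forEach(function (track) { track.stop(); });
  }
  currentFacingMode = (currentFacingMode === 'environment') ? 'user' : 'environment';
  startCamera(currentFacingMode);
}
```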
Used Libraries
- Fullscreen functionality: Screenfull.js
- Detect WebRTC support: DetectRTC.js
- WebRTC cross-browser: Adapter.js
- UI click sound: Howler.js
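To give a rough idea of how these libraries fit together (element IDs and the sound file path are placeholders, API names are as in recent library versions, and the template’s actual code may differ):

```javascript
// DetectRTC: check WebRTC support once its checks have loaded.
DetectRTC.load(function () {
  if (!DetectRTC.isWebRTCSupported) {
    alert('WebRTC is not supported in this browser.');
  }
});

// Howler.js: play a short click sound on the shutter button.
var clickSound = new Howl({ src: ['sounds/click.mp3'] });
document.querySelector('#takePhotoButton').addEventListener('click', function () {
  clickSound.play();
});

// Screenfull.js: toggle fullscreen mode when supported.
document.querySelector('#fullscreenButton').addEventListener('click', function () {
  if (screenfull.isEnabled) {
    screenfull.toggle();
  }
});

// adapter.js needs no explicit calls: including it before any WebRTC code
// shims browser differences in getUserMedia and related APIs.
```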
Used Assets
- Basic Click Wooden sound – GameAudio
- Material Design Icons (camera front, camera rear, photo camera, fullscreen, fullscreen exit)
Good WebRTC resources
- webrtc.github.io/samples/
- webrtc-experiment.com/
- html5rocks.com/en/tutorials/getusermedia/intro/
- developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia
Credits and a link to this page are always appreciated.
I’m always curious how people end up using my stuff, so feel free to contact me or send a tweet to @kasperkamperman.
