At home, I have a Raspberry Pi 2 and a USB LifeCam VX-6000 webcam that, for now, I don't use for anything. So I thought I could use both to build a little application that would allow me to spy on anyone (and view the results from a simple website).
As it's now possible to run Windows 10 IoT Core on the Raspberry Pi, the best way for me to create that application is a UWP application, developed in C#.
Basically, the code of this app will be simple: I’ll first look for the webcam and configure it:
var videodevices = await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);
var camera = videodevices.FirstOrDefault(d => d.EnclosureLocation != null);

if (camera != null)
{
    await InitializeCameraAsync(camera);
    await InitializeAzureStuffAsync();
}
Unfortunately, after trying this code on my device, it does not work: I was unable to get a reference to the camera. After some research, the reason is simple: for now, the drivers for my webcam are not supported (check the hardware compatibility list here: http://ms-iot.github.io/content/en-US/win10/SupportedInterfaces.htm).
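If you want to check what the system actually sees, a quick way is to dump the list of video capture devices to the debug output (a minimal sketch; on the Pi, nothing shows up for my webcam since the driver is missing):

// List every video capture device the OS exposes, to see whether the webcam is detected at all
var videodevices = await DeviceInformation.FindAllAsync(DeviceClass.VideoCapture);
Debug.WriteLine("Video capture devices found: " + videodevices.Count);

foreach (var device in videodevices)
{
    Debug.WriteLine(device.Name + " - " + device.Id);
}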
Well, I could have given up on the project, but why? It's a cool idea and we have the Universal Windows Platform: the app meant to run on the Raspberry Pi can also run on any Windows 10 device!
So I just changed the target (using my laptop instead of the Pi) and, well, the code works fine: the camera is found and correctly initialized:
private async Task InitializeCameraAsync(DeviceInformation camera)
{
    await Task.Factory.StartNew(async () =>
    {
        _mediaCapture = new MediaCapture();
        await _mediaCapture.InitializeAsync(new MediaCaptureInitializationSettings
        {
            PhotoCaptureSource = PhotoCaptureSource.VideoPreview,
            StreamingCaptureMode = StreamingCaptureMode.Video,
            VideoDeviceId = camera.Id
        });

        // Find the highest resolution available
        VideoEncodingProperties maxResolution = null;
        var max = 0;
        var resolutions = _mediaCapture.VideoDeviceController.GetAvailableMediaStreamProperties(MediaStreamType.Photo);
        foreach (var props in resolutions)
        {
            var res = props as VideoEncodingProperties;
            if (res?.Width * res?.Height > max)
            {
                max = (int)(res.Width * res.Height);
                maxResolution = res;
            }
        }

        await _mediaCapture.VideoDeviceController.SetMediaStreamPropertiesAsync(MediaStreamType.Photo, maxResolution);

        await Dispatcher.RunIdleAsync(async args =>
        {
            // Display camera preview
            CaptureElement.Source = _mediaCapture;
            await _mediaCapture.StartPreviewAsync();
        });

        _imageEncodingProperties = ImageEncodingProperties.CreateJpeg();
    });
}
The goal of the application is to take a photo with the webcam and upload it to Azure Blob storage, so we have some Azure stuff to initialize too:
private async Task InitializeAzureStuffAsync()
{
    await Task.Factory.StartNew(async () =>
    {
        var storageCredentials = new StorageCredentials(BLOB_ACCOUNT_NAME, BLOB_ACCOUNT_KEY);
        var storageAccount = new CloudStorageAccount(storageCredentials, true);
        var blobClient = storageAccount.CreateCloudBlobClient();

        _imagesContainer = blobClient.GetContainerReference("images");
        if (!await _imagesContainer.ExistsAsync())
        {
            await _imagesContainer.CreateIfNotExistsAsync();
        }

        var serviceProperties = await blobClient.GetServicePropertiesAsync();
        serviceProperties.Cors.CorsRules.Clear();
        serviceProperties.Cors.CorsRules.Add(new CorsRule
        {
            AllowedHeaders = new List<string> { "*" },
            AllowedMethods = CorsHttpMethods.Get | CorsHttpMethods.Head,
            AllowedOrigins = new List<string> { "*" },
            ExposedHeaders = new List<string> { "*" }
        });
        await blobClient.SetServicePropertiesAsync(serviceProperties);
    });
}
The code itself is pretty simple: we check if the blob container “images” exists and, if not, we create it. Then, we update the Blob service properties to allow any origin to access it (otherwise, we’ll get a cross-origin exception in the browser).
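One thing the snippet above doesn’t show: since the website will download Image.jpg directly from its blob URL, the container also needs to allow anonymous read access on its blobs (otherwise you’d have to use a SAS token instead). A minimal sketch with the same WindowsAzure.Storage client, to be called right after the container is created, could look like this:

// Allow anonymous read access on the blobs of the "images" container,
// so the website can GET Image.jpg directly from its URL.
// (Alternative: keep the container private and hand a SAS token to the page.)
await _imagesContainer.SetPermissionsAsync(new BlobContainerPermissions
{
    PublicAccess = BlobContainerPublicAccessType.Blob
});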
Once this part is done, we just need to take the picture from the webcam. As we want to create a spy system, we need more than one picture, so we’ll use a timer:
private void OnStartWatchingButtonClick(object sender, RoutedEventArgs e)
{
    if (_timer == null)
    {
        _timer = new DispatcherTimer();
        _timer.Interval = new TimeSpan(0, 0, 1);
        _timer.Tick += OnTimerTick;
        _timer.Start();
    }

    StartWatchingButton.IsEnabled = false;
    StopWatchingButton.IsEnabled = true;
}

private async void OnTimerTick(object sender, object e)
{
    if (_mediaCapture == null)
        return;

    using (var memoryStream = new InMemoryRandomAccessStream())
    {
        try
        {
            await _mediaCapture.CapturePhotoToStreamAsync(_imageEncodingProperties, memoryStream);
            await memoryStream.FlushAsync();
            memoryStream.Seek(0);

            var array = new byte[memoryStream.Size];
            await memoryStream.ReadAsync(array.AsBuffer(), (uint)memoryStream.Size, InputStreamOptions.None);
            if (array.Length <= 0)
                return;

            var blockBlob = _imagesContainer.GetBlockBlobReference("Image.jpg");
            await blockBlob.UploadFromByteArrayAsync(array, 0, array.Length);
        }
        catch (Exception ex)
        {
            Debug.WriteLine("Exception: " + ex.Message);
        }
    }
}
Every second, the Tick event of the timer is raised: using the CapturePhotoToStreamAsync method, we grab a picture from the webcam, then we use the WindowsAzure.Storage library to upload it to the blob storage.
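For completeness, the Stop button handler is not shown above; a minimal sketch (assuming the same _timer field and button names as in the start handler) could be:

private void OnStopWatchingButtonClick(object sender, RoutedEventArgs e)
{
    // Stop and release the timer so no more pictures are captured
    if (_timer != null)
    {
        _timer.Stop();
        _timer.Tick -= OnTimerTick;
        _timer = null;
    }

    StartWatchingButton.IsEnabled = true;
    StopWatchingButton.IsEnabled = false;
}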
So our application is up and running, pushing a picture to the blob storage every second. We now need a way to view that picture. Let’s go for a simple AngularJS application that performs a GET request to… the URL of our image in the blob container:
"use strict"; app.controller("indexCtrl", ["$scope", function ($scope) { var watchIntervalId; $scope.isRunning = false; $scope.startWatching = function () { $scope.isRunning = true; watchIntervalId = setInterval(function () { var xhr = new XMLHttpRequest(); xhr.onreadystatechange = function () { if (this.readyState === 4 && this.status === 200) { var url = window.URL || window.webkitURL; $scope.ImagePath = url.createObjectURL(this.response); $scope.$apply(); } } xhr.open("GET", "http://XXXXX.blob.core.windows.net/images/Image.jpg"); xhr.responseType = "blob"; xhr.send(); }, 1000); } $scope.stopWatching = function () { $scope.isRunning = false; clearInterval(watchIntervalId); } }]);
Let’s add some user interface components to get a nice app:
<!DOCTYPE html>
<html ng-app="CameraWatcherApp">
<head>
    <title>Camera Watcher</title>
    <meta charset="utf-8"/>
    <meta http-equiv="cache-control" content="no-cache">
    <meta http-equiv="expires" content="0">
    <meta http-equiv="pragma" content="no-cache">
    <link rel="stylesheet" type="text/css" href="Content/bootstrap.css" />
    <script src="js/vendors/jquery/jquery-2.1.4.min.js"></script>
    <script src="js/vendors/angular/angular.min.js"></script>
    <script src="js/vendors/angular/angular-route.js"></script>
    <script src="js/app.js"></script>
    <script src="js/controllers/indexCtrl.js"></script>
</head>
<body>
    <div class="container" ng-controller="indexCtrl">
        <h1>Camera Watcher <small>Your personal spy</small></h1>
        <div class="row">
            <div class="col-md-12">
                <div class="center-block">
                    <img ng-hide="!isRunning" src="{{ImagePath}}" class="img-rounded" style="width: 800px; height: 600px; display: block; margin: 0 auto;" alt="Camera Watcher"/>
                    <br/>
                    <div class="text-center">
                        <div class="btn-group">
                            <button class="btn btn-default btn-lg" ng-click="startWatching()" ng-disabled="isRunning"><span class="glyphicon glyphicon-play" aria-hidden="true"></span> Start Watching</button>
                            <button class="btn btn-default btn-lg" ng-click="stopWatching()" ng-disabled="!isRunning"><span class="glyphicon glyphicon-stop" aria-hidden="true"></span> Stop Watching</button>
                        </div>
                    </div>
                </div>
            </div>
        </div>
    </div>
</body>
</html>
And you’re done! Now, if you run the application, a picture will be sent every second and, to view it, just launch the website:
Here is the direct link to the video: https://www.youtube.com/watch?v=XyMMdov-BGk
As you can see, it’s pretty straightforward to implement your own spy system. Of course, when Windows 10 IoT Core supports more devices, you’ll be able to use this code on the Raspberry Pi. But, for now, you need to find another way to access your webcam and upload pictures from it. For that, I suggest you take a look at the great article from Laurent, which explains how to do the same upload to a blob storage from a Node.js app: http://blogs.msdn.com/b/laurelle/archive/2015/11/13/azure-iot-hub-uploading-a-webcam-picture-in-an-azure-blob-with-node-js-on-windows-and-linux.aspx
Happy coding!