
Face API with ASP.Net MVC


9 Jan 2017

CPOL

4 min read


In this article, we will take a look at the Face API and create a simple ASP.Net MVC application that uses the Face API to detect faces in an image.

Highlights

  • What is Cognitive Services?
  • What is the Face API?
  • Signing up for the Face API
  • Creating the ASP.Net MVC sample application
    • Adding AngularJS
    • Installing and configuring the Face API
  • Uploading an image to detect faces
  • Marking faces in the image
  • Listing detected faces with face information
  • Summary

Cognitive Services (Project Oxford)

Microsoft Cognitive Services, formerly known as Project Oxford, is a set of machine-learning application programming interfaces (REST APIs), SDKs, and services that help developers build smarter applications by adding intelligent features such as emotion and video detection; facial, speech, and vision recognition; and speech and language understanding. Visit the website for more details: https://staging.www.projectoxford.ai

Face API: 

Microsoft Cognitive Services has four main components:

  1. Face recognition: identifies faces in photos, groups faces that look alike, and verifies whether two faces are the same,
  2. Speech processing: recognizes speech and translates it into text, and vice versa,
  3. Visual tools: analyze visual content, looking for things like inappropriate material or a dominant color scheme, and
  4. Language Understanding Intelligent Service (LUIS): understands what users mean when they say or type something using natural, everyday language.

Visit the Microsoft blog for more details: http://blogs.microsoft.com/next/2015/05/01/microsofts-project-oxford-helps-developers-build-more-intelligent-apps/#sm.00000qcvfxlefheczxed59b9u8jna

We are going to implement the face recognition API in our sample application. So, what is the Face API? The Face API is a cloud-based service that provides state-of-the-art face algorithms to detect and recognize faces in images.

The Face API includes:

  1. Face Detection
  2. Face Verification
  3. Similar Face Searching
  4. Face Grouping
  5. Face Identification

Get a detailed overview here: https://www.microsoft.com/cognitive-services/en-us/face-api/documentation/overview

Face Detection: In this article, we will focus on face detection, so before working on the sample application, let's take a closer look at the API reference (Face API - V1.0). To enable the service, we need to get an authorization key (API key) by signing up for the service for free. Visit this link to sign up: https://www.microsoft.com/cognitive-services/en-us/sign-up
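Under the hood, the SDK we will use later simply wraps a REST call. As a rough sketch (the regional host shown here is an assumption; use the endpoint listed on your own subscription page, and replace the placeholder key), a raw detection request looks like this:

```shell
# Hypothetical detect call: POST the image bytes, ask for id, landmarks,
# and the age/gender/smile/glasses attributes used later in this article.
curl -X POST "https://westus.api.cognitive.microsoft.com/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=true&returnFaceAttributes=age,gender,smile,glasses" \
     -H "Ocp-Apim-Subscription-Key: <your API key>" \
     -H "Content-Type: application/octet-stream" \
     --data-binary @photo.jpg
```

The response is a JSON array with one entry per detected face, including the face rectangle that we will use to crop and mark faces.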

Signing up for the Face API

Sign up by clicking one of these:

  • Microsoft account
  • GitHub
  • LinkedIn

After signing in successfully, you will be redirected to the subscription page. Request a free trial of any product by checking its checkbox.

Process: Click "Request new trials" > "Face - Preview" > "Agree to the terms" > "Subscribe"

Here you can see I have attached a screenshot of my subscription. In the "Keys" column, click "Show" under "Key 1" to preview the API key, and click "Copy" to copy the key for later use. The key can be regenerated by clicking "Regenerate".

So far we have completed the subscription process, so now let's get started with the ASP.Net MVC sample application.

Creating the Sample Application:

Before getting started with the experiment, make sure Visual Studio 2015 is installed on your development machine. Let's open Visual Studio 2015; from the File menu, click New > Project.

Select "ASP.Net Web Application", name it anything you like (I have named it "FaceAPI_MVC"), and click the OK button for the next step. Choose the "Empty" template for the sample application, check the "MVC" checkbox, and click OK.

Now, in our empty template, let's create the MVC controller and generate its views by scaffolding.

Adding AngularJS:

We need to add packages to our sample application. To do that, go to Solution Explorer and right-click the project > "Manage NuGet Packages".

In the package manager, type "angularjs" to search, select the package, and click "Install".

After installing the "angularjs" package, we need to reference it in our layout page and define the app root using the "ng-app" directive.

If you are new to AngularJS, get a basic overview of AngularJS with MVC applications here: http://shashangka.com/2016/01/17/asp-net-mvc-5-with-angularjs-part-1
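For illustration, a minimal layout page might look like the sketch below. The script path is an assumption (it depends on the AngularJS package version installed), while the module name `myFaceApp` and the `NgScript` section match the Angular controller and MVC view shown later in this article:

```
@* _Layout.cshtml (sketch) *@
<!DOCTYPE html>
<html ng-app="myFaceApp">
<head>
    <meta charset="utf-8" />
    <title>@ViewBag.Title</title>
</head>
<body>
    @RenderBody()
    <script src="~/Scripts/angular.min.js"></script>
    @RenderSection("NgScript", required: false)
</body>
</html>
```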

Installing and Configuring the Face API

We need to add the "Microsoft.ProjectOxford.Face" library to our sample application. Type it into the search box as in the screen below, then select and install it.

Web.Config: In the application's Web.Config file, add a new configuration setting to the appSettings section containing the API key we generated earlier.

<add key="FaceServiceKey" value="xxxxxxxxxxxxxxxxxxxxxxxxxxx" />

The final appSettings:

<appSettings>
  <add key="webpages:Version" value="3.0.0.0" />
  <add key="webpages:Enabled" value="false" />
  <add key="PreserveLoginUrl" value="true" />
  <add key="ClientValidationEnabled" value="true" />
  <add key="UnobtrusiveJavaScriptEnabled" value="true" />
  <add key="FaceServiceKey" value="xxxxxxxxxxxxxxxxxxxxxxxxxxx" /> <!--replace with API Key-->
</appSettings>

MVC Controller: This is where we perform our main operations. First, get the FaceServiceKey value from web.config via ConfigurationManager.AppSettings.

private static string ServiceKey = ConfigurationManager.AppSettings["FaceServiceKey"];

Here, the MVC controller has two main methods to perform the face detection operation: an HttpPost method that uploads the image file to a folder, and an HttpGet method that reads the uploaded image and detects faces by calling the API service. Both methods are called from the client script when uploading an image to detect faces. Let's walk through them step by step.

Image Upload: This method is responsible for uploading the image.

[HttpPost]
public JsonResult SaveCandidateFiles()
{
    //Create Directory if Not Exist
    //Requested File Collection
    //Clear Folder
    //Save File in Folder
}

Image Detection: This method is responsible for detecting faces in the uploaded image.

[HttpGet]
public async Task<dynamic> GetDetectedFaces()
{
     // Open an existing file for reading
     // Create Instance of Service Client by passing Servicekey as parameter in constructor
     // Call detection REST API
     // Create & Save Cropped Detected Face Images
     // Convert detection result into UI binding object
}

Here is the code snippet that calls the Face API to detect faces in the uploaded image.

// Open an existing file for reading
var fStream = System.IO.File.OpenRead(FullImgPath);

// Create Instance of Service Client by passing Servicekey as parameter in constructor 
var faceServiceClient = new FaceServiceClient(ServiceKey);

// Call detection REST API
Face[] faces = await faceServiceClient.DetectAsync(fStream, true, true, new FaceAttributeType[] { FaceAttributeType.Gender, FaceAttributeType.Age, FaceAttributeType.Smile, FaceAttributeType.Glasses });

Create and save cropped images of the detected faces.

var croppedImg = Convert.ToString(Guid.NewGuid()) + ".jpeg" as string;
var croppedImgPath = directory + '/' + croppedImg as string;
var croppedImgFullPath = Server.MapPath(directory) + '/' + croppedImg as string;
CroppedFace = CropBitmap(
              (Bitmap)Image.FromFile(FullImgPath),
              face.FaceRectangle.Left,
              face.FaceRectangle.Top,
              face.FaceRectangle.Width,
              face.FaceRectangle.Height);
CroppedFace.Save(croppedImgFullPath, ImageFormat.Jpeg);
if (CroppedFace != null)
   ((IDisposable)CroppedFace).Dispose();

The method that crops the image according to the detected face rectangle:

public Bitmap CropBitmap(Bitmap bitmap, int cropX, int cropY, int cropWidth, int cropHeight)
{
    // Crop Images
}

The final, complete MVC controller:

public class FaceDetectionController : Controller
{
    private static string ServiceKey = ConfigurationManager.AppSettings["FaceServiceKey"];
    private static string directory = "../UploadedFiles";
    private static string UplImageName = string.Empty;
    private ObservableCollection<vmFace> _detectedFaces = new ObservableCollection<vmFace>();
    private ObservableCollection<vmFace> _resultCollection = new ObservableCollection<vmFace>();
    public ObservableCollection<vmFace> DetectedFaces
    {
        get
        {
            return _detectedFaces;
        }
    }
    public ObservableCollection<vmFace> ResultCollection
    {
        get
        {
            return _resultCollection;
        }
    }
    public int MaxImageSize
    {
        get
        {
            return 450;
        }
    }

    // GET: FaceDetection
    public ActionResult Index()
    {
        return View();
    }

    [HttpPost]
    public JsonResult SaveCandidateFiles()
    {
        string message = string.Empty, fileName = string.Empty, actualFileName = string.Empty; bool flag = false;
        //Requested File Collection
        HttpFileCollection fileRequested = System.Web.HttpContext.Current.Request.Files;
        if (fileRequested != null)
        {
            //Create New Folder
            CreateDirectory();

            //Clear Existing File in Folder
            ClearDirectory();

            for (int i = 0; i < fileRequested.Count; i++)
            {
                var file = Request.Files[i];
                actualFileName = file.FileName;
                fileName = Guid.NewGuid() + Path.GetExtension(file.FileName);
                int size = file.ContentLength;

                try
                {
                    file.SaveAs(Path.Combine(Server.MapPath(directory), fileName));
                    message = "File uploaded successfully";
                    UplImageName = fileName;
                    flag = true;
                }
                catch (Exception)
                {
                    message = "File upload failed! Please try again";
                }
            }
        }
        return new JsonResult
        {
            Data = new
            {
                Message = message,
                UplImageName = fileName,
                Status = flag
            }
        };
    }

    [HttpGet]
    public async Task<dynamic> GetDetectedFaces()
    {
        ResultCollection.Clear();
        DetectedFaces.Clear();

        var DetectedResultsInText = string.Format("Detecting...");
        var FullImgPath = Server.MapPath(directory) + '/' + UplImageName as string;
        var QueryFaceImageUrl = directory + '/' + UplImageName;

        if (UplImageName != "")
        {
            //Create New Folder
            CreateDirectory();

            try
            {
                // Call detection REST API
                using (var fStream = System.IO.File.OpenRead(FullImgPath))
                {
                    // User picked one image
                    var imageInfo = UIHelper.GetImageInfoForRendering(FullImgPath);

                    // Create Instance of Service Client by passing Servicekey as parameter in constructor 
                    var faceServiceClient = new FaceServiceClient(ServiceKey);
                    Face[] faces = await faceServiceClient.DetectAsync(fStream, true, true, new FaceAttributeType[] { FaceAttributeType.Gender, FaceAttributeType.Age, FaceAttributeType.Smile, FaceAttributeType.Glasses });
                    DetectedResultsInText = string.Format("{0} face(s) has been detected!!", faces.Length);
                    Bitmap CroppedFace = null;

                    foreach (var face in faces)
                    {
                        //Create & Save Cropped Images
                        var croppedImg = Convert.ToString(Guid.NewGuid()) + ".jpeg" as string;
                        var croppedImgPath = directory + '/' + croppedImg as string;
                        var croppedImgFullPath = Server.MapPath(directory) + '/' + croppedImg as string;
                        CroppedFace = CropBitmap(
                                        (Bitmap)Image.FromFile(FullImgPath),
                                        face.FaceRectangle.Left,
                                        face.FaceRectangle.Top,
                                        face.FaceRectangle.Width,
                                        face.FaceRectangle.Height);
                        CroppedFace.Save(croppedImgFullPath, ImageFormat.Jpeg);
                        if (CroppedFace != null)
                            ((IDisposable)CroppedFace).Dispose();


                        DetectedFaces.Add(new vmFace()
                        {
                            ImagePath = FullImgPath,
                            FileName = croppedImg,
                            FilePath = croppedImgPath,
                            Left = face.FaceRectangle.Left,
                            Top = face.FaceRectangle.Top,
                            Width = face.FaceRectangle.Width,
                            Height = face.FaceRectangle.Height,
                            FaceId = face.FaceId.ToString(),
                            Gender = face.FaceAttributes.Gender,
                            Age = string.Format("{0:#} years old", face.FaceAttributes.Age),
                            IsSmiling = face.FaceAttributes.Smile > 0.0 ? "Smile" : "Not Smile",
                            Glasses = face.FaceAttributes.Glasses.ToString(),
                        });
                    }

                    // Convert detection result into UI binding object for rendering
                    var rectFaces = UIHelper.CalculateFaceRectangleForRendering(faces, MaxImageSize, imageInfo);
                    foreach (var face in rectFaces)
                    {
                        ResultCollection.Add(face);
                    }
                }
            }
            catch (FaceAPIException)
            {
                //do exception work
            }
        }
        return new JsonResult
        {
            Data = new
            {
                QueryFaceImage = QueryFaceImageUrl,
                MaxImageSize = MaxImageSize,
                FaceInfo = DetectedFaces,
                FaceRectangles = ResultCollection,
                DetectedResults = DetectedResultsInText
            },
            JsonRequestBehavior = JsonRequestBehavior.AllowGet
        };
    }

    public Bitmap CropBitmap(Bitmap bitmap, int cropX, int cropY, int cropWidth, int cropHeight)
    {
        Rectangle rect = new Rectangle(cropX, cropY, cropWidth, cropHeight);
        Bitmap cropped = bitmap.Clone(rect, bitmap.PixelFormat);
        return cropped;
    }

    public void CreateDirectory()
    {
        bool exists = System.IO.Directory.Exists(Server.MapPath(directory));
        if (!exists)
        {
            try
            {
                Directory.CreateDirectory(Server.MapPath(directory));
            }
            catch (Exception ex)
            {
                ex.ToString();
            }
        }
    }

    public void ClearDirectory()
    {
        DirectoryInfo dir = new DirectoryInfo(Path.Combine(Server.MapPath(directory)));
        var files = dir.GetFiles();
        if (files.Length > 0)
        {
            try
            {
                foreach (FileInfo fi in dir.GetFiles())
                {
                    GC.Collect();
                    GC.WaitForPendingFinalizers();
                    fi.Delete();
                }
            }
            catch (Exception ex)
            {
                ex.ToString();
            }
        }
    }
}

UI Helper

/// <summary>
/// UI helper functions
/// </summary>
internal static class UIHelper
{
    #region Methods

    /// <summary>
    /// Calculate the rendering face rectangle
    /// </summary>
    /// <param name="faces">Detected face from service</param>
    /// <param name="maxSize">Image rendering size</param>
    /// <param name="imageInfo">Image width and height</param>
    /// <returns>Face structure for rendering</returns>
    public static IEnumerable<vmFace> CalculateFaceRectangleForRendering(IEnumerable<Microsoft.ProjectOxford.Face.Contract.Face> faces, int maxSize, Tuple<int, int> imageInfo)
    {
        var imageWidth = imageInfo.Item1;
        var imageHeight = imageInfo.Item2;
        float ratio = (float)imageWidth / imageHeight;

        int uiWidth = 0;
        int uiHeight = 0;
  
        uiWidth = maxSize;
        uiHeight = (int)(maxSize / ratio);
        
        float scale = (float)uiWidth / imageWidth;

        foreach (var face in faces)
        {
            yield return new vmFace()
            {
                FaceId = face.FaceId.ToString(),
                Left = (int)(face.FaceRectangle.Left * scale),
                Top = (int)(face.FaceRectangle.Top * scale),
                Height = (int)(face.FaceRectangle.Height * scale),
                Width = (int)(face.FaceRectangle.Width * scale),
            };
        }
    }

    /// <summary>
    /// Get image basic information for further rendering usage
    /// </summary>
    /// <param name="imageFilePath">Path to the image file</param>
    /// <returns>Image width and height</returns>
    public static Tuple<int, int> GetImageInfoForRendering(string imageFilePath)
    {
        try
        {
            using (var s = File.OpenRead(imageFilePath))
            {
                JpegBitmapDecoder decoder = new JpegBitmapDecoder(s, BitmapCreateOptions.None, BitmapCacheOption.None);
                var frame = decoder.Frames.First();

                // Store image width and height for following rendering
                return new Tuple<int, int>(frame.PixelWidth, frame.PixelHeight);
            }
        }
        catch
        {
            return new Tuple<int, int>(0, 0);
        }
    }
    #endregion Methods
}

MVC View

@{
    ViewBag.Title = "Face Detection";
}

<div ng-controller="faceDetectionCtrl">

    <h3>{{Title}}</h3>
    <div class="loadmore">
        <div ng-show="loaderMoreupl" ng-class="result">
            <img src="~/Content/ng-loader.gif" /> {{uplMessage}}
        </div>
    </div>
    <div class="clearfix"></div>
    <table style="width:100%">
        <tr>
            <th><h4>Select Query Face</h4></th>
            <th><h4>Detection Result</h4></th>
        </tr>
        <tr>
            <td style="width:60%" valign="top">
                <form novalidate name="f1">
                    <input type="file" name="file" accept="image/*" onchange="angular.element(this).scope().selectCandidateFileforUpload(this.files)" required />
                </form>
                <div class="clearfix"></div>
                <div class="loadmore">
                    <div ng-show="loaderMore" ng-class="result">
                        <img src="~/Content/ng-loader.gif" /> {{faceMessage}}
                    </div>
                </div>
                <div class="clearfix"></div>
                <div class="facePreview_thumb_big" id="faceCanvas"></div>
            </td>
            <td style="width:40%" valign="top">
                <p>{{DetectedResultsMessage}}</p>
                <hr />
                <div class="clearfix"></div>
                <div class="facePreview_thumb_small">
                    <div ng-repeat="item in DetectedFaces" class="col-sm-12">
                        <div class="col-sm-3">
                            <img ng-src="{{item.FilePath}}" width="100" />
                        </div>
                        <div class="col-sm-8">
                            <ul>
                                @*<li>FaceId: {{item.FaceId}}</li>*@
                                <li>Age: {{item.Age}}</li>
                                <li>Gender: {{item.Gender}}</li>
                                <li>{{item.IsSmiling}}</li>
                                <li>{{item.Glasses}}</li>
                            </ul>
                        </div>
                        <div class="clearfix"></div>
                    </div>
                </div>
                <div ng-hide="DetectedFaces.length">No face detected!!</div>
            </td>
        </tr>
    </table>
</div>

@section NgScript{
    <script src="~/ScriptsNg/FaceDetectionCtrl.js"></script>
}

Angular Controller

angular.module('myFaceApp', [])
.controller('faceDetectionCtrl', function ($scope, FileUploadService) {

    $scope.Title = 'Microsoft FaceAPI - Face Detection';
    $scope.DetectedResultsMessage = 'No result found!!';
    $scope.SelectedFileForUpload = null;
    $scope.UploadedFiles = [];
    $scope.SimilarFace = [];
    $scope.FaceRectangles = [];
    $scope.DetectedFaces = [];

    //File Select & Save 
    $scope.selectCandidateFileforUpload = function (file) {
        $scope.SelectedFileForUpload = file;
        $scope.loaderMoreupl = true;
        $scope.uplMessage = 'Uploading, please wait....!';
        $scope.result = "color-red";

        //Save File
        var uploaderUrl = "/FaceDetection/SaveCandidateFiles";
        var fileSave = FileUploadService.UploadFile($scope.SelectedFileForUpload, uploaderUrl);
        fileSave.then(function (response) {
            if (response.data.Status) {
                $scope.GetDetectedFaces();
                angular.forEach(angular.element("input[type='file']"), function (inputElem) {
                    angular.element(inputElem).val(null);
                });
                $scope.f1.$setPristine();
                $scope.uplMessage = response.data.Message;
                $scope.loaderMoreupl = false;
            }
        },
        function (error) {
            console.warn("Error: " + error);
        });
    }

    //Get Detected Faces
    $scope.GetDetectedFaces = function () {
        $scope.loaderMore = true;
        $scope.faceMessage = 'Preparing, detecting faces, please wait....!';
        $scope.result = "color-red";

        var fileUrl = "/FaceDetection/GetDetectedFaces";
        var fileView = FileUploadService.GetUploadedFile(fileUrl);
        fileView.then(function (response) {
            $scope.QueryFace = response.data.QueryFaceImage;
            $scope.DetectedResultsMessage = response.data.DetectedResults;
            $scope.DetectedFaces = response.data.FaceInfo;
            $scope.FaceRectangles = response.data.FaceRectangles;
            $scope.loaderMore = false;

            //Reset element
            $('#faceCanvas_img').remove();
            $('.divRectangle_box').remove();

            //get element byID
            var canvas = document.getElementById('faceCanvas');

            //add image element
            var elemImg = document.createElement("img");
            elemImg.setAttribute("src", $scope.QueryFace);
            elemImg.setAttribute("width", response.data.MaxImageSize);
            elemImg.id = 'faceCanvas_img';
            canvas.append(elemImg);

            //Loop with face rectangles
            angular.forEach($scope.FaceRectangles, function (imgs, i) {
                //console.log($scope.DetectedFaces[i])
                //Create rectangle for every face
                var divRectangle = document.createElement('div');
                var width = imgs.Width;
                var height = imgs.Height;
                var top = imgs.Top;
                var left = imgs.Left;

                //Style Div
                divRectangle.className = 'divRectangle_box';
                divRectangle.style.width = width + 'px';
                divRectangle.style.height = height + 'px';
                divRectangle.style.position = 'absolute';
                divRectangle.style.top = top + 'px';
                divRectangle.style.left = left + 'px';
                divRectangle.style.zIndex = '999';
                divRectangle.style.border = '1px solid #fff';
                divRectangle.style.margin = '0';
                divRectangle.id = 'divRectangle_' + (i + 1);

                //Generate rectangles
                canvas.append(divRectangle);
            });
        },
        function (error) {
            console.warn("Error: " + error);
        });
    };
})
.factory('FileUploadService', function ($http, $q) {
    var fact = {};
    fact.UploadFile = function (files, uploaderUrl) {
        var formData = new FormData();
        angular.forEach(files, function (f, i) {
            formData.append("file", files[i]);
        });
        var request = $http({
            method: "post",
            url: uploaderUrl,
            data: formData,
            withCredentials: true,
            headers: { 'Content-Type': undefined },
            transformRequest: angular.identity
        });
        return request;
    }
    fact.GetUploadedFile = function (fileUrl) {
        return $http.get(fileUrl);
    }
    return fact;
})

Uploading an Image to Detect Faces

Browse an image from a local folder to upload it and detect the faces in it.

Marking Faces in the Image

The detected faces are marked with white rectangles.

Listing Detected Faces with Face Information

The faces are listed and cropped out individually, with detailed face information.

Summary

You have just seen how to call the Face API to detect faces in an image. Hopefully this will help make applications more intelligent and smart 🙂.
