d478: Apple CoreML (incl. CoreML models)

At WWDC 2017 Apple presented Core ML, a new foundational machine learning framework used across Apple products, including Siri, Camera, and QuickType. Core ML delivers fast on-device performance and easy integration of machine learning models, enabling you to build apps with intelligent new features using just a few lines of code.

Apple CoreML

Core ML lets you integrate a broad variety of machine learning model types into your app. In addition to supporting extensive deep learning with over 30 layer types, it also supports standard models such as tree ensembles, SVMs, and generalized linear models. Because it is built on top of low-level technologies like Metal and Accelerate, Core ML seamlessly takes advantage of the CPU and GPU to provide maximum performance and efficiency. Models run directly on the device, so user data never needs to leave it to be analyzed.
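To give a feel for the "few lines of code" claim, here is a minimal sketch of using a model in Swift. It assumes a hypothetical image classifier called FlowerClassifier.mlmodel has been added to an Xcode project, which makes Xcode generate a typed Swift class of the same name:

```swift
import CoreML

// Sketch, assuming a hypothetical "FlowerClassifier.mlmodel" added to the
// project; Xcode generates the FlowerClassifier class automatically.
func classify(_ pixelBuffer: CVPixelBuffer) throws -> String {
    let model = FlowerClassifier()
    // The generated prediction() method is strongly typed: its parameters
    // and return value mirror the inputs/outputs declared in the .mlmodel.
    let output = try model.prediction(image: pixelBuffer)
    return output.classLabel
}
```

The generated class hides all of the Metal/Accelerate plumbing; you only deal with typed inputs and outputs.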

Core ML computer vision features (exposed through the Vision framework):

  • face tracking
  • face detection
  • landmarks
  • text detection
  • rectangle detection
  • barcode detection
  • object tracking
  • image registration
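As an illustration of the computer vision features above, here is a hedged sketch of face detection using the Vision framework's VNDetectFaceRectanglesRequest (available from iOS 11):

```swift
import Vision
import UIKit

// Sketch: detect face bounding boxes in a UIImage with the Vision framework.
func detectFaces(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }
    let request = VNDetectFaceRectanglesRequest { request, error in
        guard let faces = request.results as? [VNFaceObservation] else { return }
        // Each observation carries a normalized bounding box (0...1 coordinates).
        for face in faces {
            print("Face at \(face.boundingBox)")
        }
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

The other features in the list (text detection, barcode detection, object tracking, and so on) follow the same request/handler pattern with different VNRequest subclasses.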

Core ML natural language processing APIs:

  • language identification
  • tokenization
  • lemmatization
  • speech recognition
  • entity recognition
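Two of the NLP features above, language identification and tokenization, can be sketched with NSLinguisticTagger, the text-processing API shipped alongside Core ML in iOS 11 / macOS 10.13 (the sample sentence is only an illustration):

```swift
import Foundation

// Sketch: language identification and word tokenization with NSLinguisticTagger.
let text = "Core ML delivers fast on-device machine learning."
let tagger = NSLinguisticTagger(tagSchemes: [.tokenType, .lemma], options: 0)
tagger.string = text

// Language identification: the tagger's best guess for the dominant language.
print(tagger.dominantLanguage ?? "unknown")

// Tokenization: enumerate word tokens over the whole string.
let range = NSRange(location: 0, length: text.utf16.count)
tagger.enumerateTags(in: range, unit: .word, scheme: .tokenType,
                     options: [.omitPunctuation, .omitWhitespace]) { _, tokenRange, _ in
    print((text as NSString).substring(with: tokenRange))
}
```

Lemmatization and entity recognition use the same enumerateTags call with the .lemma and .nameType schemes; speech recognition lives in the separate Speech framework.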

Ready-to-use models:

Places205-GoogLeNet CoreML (Detects the scene of an image from 205 categories such as an airport terminal, bedroom, forest, coast, and more.) – GoogLeNetPlaces.mlmodel

ResNet50 CoreML (Detects the dominant objects present in an image from a set of 1000 categories such as trees, animals, food, vehicles, people, and more.) – Resnet50.mlmodel

Inception v3 CoreML (Detects the dominant objects present in an image from a set of 1000 categories such as trees, animals, food, vehicles, people, and more.) – Inceptionv3.mlmodel

VGG16 CoreML (Detects the dominant objects present in an image from a set of 1000 categories such as trees, animals, food, vehicles, people, and more.) – VGG16.mlmodel
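Any of these ready-to-use models can be driven through Vision. The sketch below assumes Resnet50.mlmodel has been downloaded from Apple and added to the project, so Xcode has generated a Resnet50 class:

```swift
import Vision
import CoreML

// Sketch: classify an image with the downloaded Resnet50.mlmodel,
// wrapped in a Vision request so Vision handles image scaling/cropping.
func classifyImage(_ cgImage: CGImage) throws {
    let visionModel = try VNCoreMLModel(for: Resnet50().model)
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        print("\(top.identifier) (confidence: \(top.confidence))")
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
}
```

Swapping in GoogLeNetPlaces, Inceptionv3, or VGG16 only changes the generated class name; the Vision plumbing stays identical.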