
Selphi Widget iOS

1. What is the widget?

FacePhi Selphi iOS Widget is a tool written in Objective-C. With it, you will be able to use most of the functionality that FacePhi offers. It is designed to simplify the integration of face recognition technology into applications.

Tools available in this widget are:

  • Internal management of the camera and resolutions
  • Assistance during the authentication process
  • Facial pattern extraction that is transparent to the integrator

1.1. Minimum requirements

  • iOS Deployment target: 13

2. How to integrate the widget?

2.1. Required libraries and configuration

From the "Build Phases" tab, add the following libraries in the "Embedded binaries" section:

  • Extractor library: FPBExtractoriOS.framework
  • Selphi Widget library: FPhiWidgetSelphi.framework
  • Core Widget library: FPhiWidgetCore.framework
  • ZipZap library: ZipZap.xcframework
  • Native library: libc++.tbd
  • IAD version: Add additional IAD libraries FPHILicenseManager.XCFramework, IDLiveFaceCamera.XCFramework, IDLiveFaceDetection.XCFramework and IDLiveFaceIAD.XCFramework.

In the "Copy bundle resources" section, add the widget resources file fphi-widget-resources-SelphiLive-1.2.zip.

Since iOS 10.0, applications that use the camera must include a usage description. To do so, add the description as the value for the NSCameraUsageDescription key in the info.plist file:

    <key>NSCameraUsageDescription</key>
    <string>Description</string>

2.2. Integration steps

To integrate the widget into a controller, once the required libraries are available, you just need to carry out the following actions:

  • Import the headers file: #import "FPhiWidgetSelphi/FPhiWidgetSelphi.h"
  • Declare a variable for the widget, type of FPhiWidget: @property FPhiWidget *widget;
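
Putting the two steps above together, a minimal controller declaration might look like the sketch below. The delegate protocol name (FPhiWidgetProtocol) is an assumption for illustration; adopt whichever delegate protocol the framework headers actually declare.

```objectivec
#import <UIKit/UIKit.h>
#import "FPhiWidgetSelphi/FPhiWidgetSelphi.h"

// Hypothetical controller: the protocol name FPhiWidgetProtocol is an
// assumption; use the protocol declared in the framework headers.
@interface AddUserViewController : UIViewController <FPhiWidgetProtocol>

// Keep a reference to the widget for the whole extraction cycle
@property FPhiWidget *widget;

@end
```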

Instantiating the widget:

    // Allocate the class and call the init method (constructor).
    NSError *error = nil;

    NSBundle *bundle = [NSBundle bundleForClass:[AddUserViewController class]];

    _widget = [[SelphiWidget alloc] initWithFrontCameraIfAvailable:true
        resources:[bundle pathForResource:@"fphi-widget-resources-SelphiLive-1.2" ofType:@"zip"]
        delegate:self error:&error];

    // Evaluate problems regarding camera permissions and other situations
    if (error != nil) {
        switch (error.code) {

            case FWEUnknown:
                NSLog(@"Widget - construction error. Unknown error");
                break;

            case FWECameraPermission:
                NSLog(@"Widget - construction error. Camera permission denied");
                break;
        }

        return;
    }

    // Initialize the camera and start the extraction cycle.
    [_widget StartExtraction];

    // Show the widget view and hide the current view.
    [self presentViewController:_widget animated:true completion:nil];

In the constructor call, the first parameter specifies which camera to use: true selects the front camera and false the rear camera. The second parameter is the path to the resources file. The third parameter sets the class that implements the protocol events of the widget (the delegate). The error parameter reports a problem during widget creation, for example an issue with camera permissions.


3. Set up the widget

You can use the following properties to set up the widget:

3.1. ResourcesPath

It sets the path of the resources file that the widget will use for its graphical configuration.

3.2. Properties

The following properties are available to configure the widget:

3.2.1. livenessMode

It sets the widget liveness mode. Permitted values are:

  • LMLivenessNone: Indicates that the photo detection mode must not be activated in the authentication processes.
  • LMLivenessMove: Indicates that the active liveness movement mode must be activated in the authentication processes.
  • LMLivenessPassive: Indicates that liveness will be evaluated on the server side, sending the "BestImage" or the corresponding "tokenTemplateRaw".
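
As a hedged sketch, and assuming the property is exposed with the name and enum values listed above, the liveness mode could be selected right after constructing the widget:

```objectivec
// Illustrative only: the property name and enum values are taken from the
// documentation above; the exact setter may differ per SDK version.
_widget.livenessMode = LMLivenessPassive;  // liveness evaluated server side
```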

3.2.2. stabilizationMode

Sets a stabilization phase prior to any authentication process in the widget. With this mode, the widget will not start any process until the user is facing forward and holding their head still.

3.2.3. qrMode

Indicates whether QR code reading should be activated prior to the authentication process.

3.2.4. userTags

It sets 4 bytes of data that may be configured by the main application and will be incorporated into the templates generated by the extractor.

3.2.5. locale

It forces the widget to use the language configuration indicated by the locale parameter. This parameter accepts a language code and a locale identification code. If the widget resources file does not contain a localization for the selected locale, the default language is used.

3.2.6. logImages

Activates or deactivates the return of the list of images captured during the extraction process. If the input parameter is true, the list of processed images is returned; otherwise, an empty list is returned.

3.2.7. tutorialFlag

Shows the tutorial view before any authentication process. Once the tutorial is completed, the widget continues as usual.

3.2.8. debugMode

It sets the debugging mode of the widget.

3.2.9. videoFilename

Sets the absolute path of the file where a video of the authentication process will be recorded. The application is responsible for requesting the necessary permissions in case that path requires them. By default, the widget will not perform any recording unless a file path is specified using this property.

3.2.10. showAfterCapture

Enables a preview of the captured selfie prompting the user to accept it or repeat it.

3.2.11 extractionDuration

Sets the amount of time the widget will keep extracting the user's facial features. The allowed values are:

  • FPhiWidgetExtractionDurationShort: 1 second extraction duration. (Default value)
  • FPhiWidgetExtractionDurationMedium: 2 seconds extraction duration.
  • FPhiWidgetExtractionDurationLong: 3 seconds extraction duration.

3.2.12 preferredOrientation

Sets the orientations the widget will permit.

The allowed values are:

  • FPhiWidgetOrientationFullSensor: All orientations allowed.
  • FPhiWidgetOrientationFullSensorNoReverse: All orientations but reverse portrait allowed.
  • FPhiWidgetOrientationPortrait: Portrait allowed. (Default)
  • FPhiWidgetOrientationLandscapeLeft: Landscape left allowed.
  • FPhiWidgetOrientationLandscapeRight: Landscape right allowed.
  • FPhiWidgetOrientationPortraitReverse: Reverse portrait allowed.
  • FPhiWidgetOrientationPortraitSensor: Portrait and reverse portrait allowed.
  • FPhiWidgetOrientationLandscapeSensor: Landscape left and landscape right allowed.
  • FPhiWidgetOrientationLocked: All orientations allowed but the widget won't rotate dynamically.

3.2.13 cameraFlash

Enables or disables the camera flash, if available.

3.2.14 jpgQuality

Sets the JPEG compression quality. Default value: 0.92f.
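
The properties above can be combined before starting the extraction. The sketch below assumes they are plain Objective-C properties with the names and enum values documented in this section; the exact types are assumptions:

```objectivec
// Hedged configuration sketch: names follow sections 3.2.x above.
_widget.logImages = true;                // return the processed image list
_widget.extractionDuration = FPhiWidgetExtractionDurationMedium;
_widget.preferredOrientation = FPhiWidgetOrientationPortrait;
_widget.jpgQuality = 0.92f;              // default compression quality

// Start the extraction once the widget is configured
[_widget StartExtraction];
```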

3.3. Methods

The following methods are available in the widget:

3.3.0. setLicense

Sets the contents of the license that will be needed for some widget features.

3.3.1. generateTemplateRawFromUIImage:(UIImage *)img

Generates a templateRaw from a native iOS image (UIImage). This method is static, so it doesn't require launching the widget to perform this operation.

3.3.2. generateTemplateRawFromNSData:(NSData *)img

Generates a templateRaw from a byte array. This array must contain the representation of the image in JPG or PNG format. This method is static, so it doesn't require launching the widget to perform this operation.
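
A hedged usage sketch for both static methods, assuming they are exposed on the SelphiWidget class and return the raw template as NSData (the receiving class and return type are assumptions):

```objectivec
// Hypothetical usage: the receiving class and return types are assumptions.
UIImage *photo = [UIImage imageNamed:@"enrolment_photo"];
NSData *templateFromImage = [SelphiWidget generateTemplateRawFromUIImage:photo];

// The byte array must contain a JPG or PNG representation of the image
NSData *jpegBytes = UIImageJPEGRepresentation(photo, 0.92f);
NSData *templateFromData = [SelphiWidget generateTemplateRawFromNSData:jpegBytes];
```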

3.3.3 widgetVersion

Returns the widget's current version as a string. This method is static, so it doesn't require launching the widget to perform this operation.


4. Customize the widget

The widget allows you to customize texts, images, fonts and colours. The customization is done through the .zip file provided with the widget. This zip contains a file named widget.xml with the definition of all widget screens, each with a series of elements that allow customization. The zip file also contains a folder with graphical resources and another folder with the text translations.

4.1. Basic description

4.1.1. Text customization

The customization of texts is done by editing the translation files inside the resources folder:

    /strings/strings.es.xml
    /strings/strings.xml

4.1.2. Images customization

To customize the images used by the widget, replace them in the resources .zip. The zip contains 3 folders:

    /resources/163dpi
    /resources/326dpi
    /resources/489dpi

These folders correspond to different screen densities; as many density folders as desired may be generated. Each folder holds the versions of the images for that resolution.

It is necessary to add the images to all folders: once the optimal resolution for the device is determined, the widget only loads the images from the folder with the selected resolution. The images are referenced from the widget.xml file.

4.1.3. Colour customization

The colour of the buttons is customized in the widget.xml file. You can customize the colour of any graphical element that appears in the widget; simply modify the desired colour property.

4.1.4. Font customization

Custom fonts must be placed in the /resources/163dpi folder and can be referenced from the widget.xml file. To change the font of a text element, modify its font property with the name of the file.

The following section gives more information about the content of the resources bundle and how to modify it.

4.2. Advanced description

4.2.1. Widget.xml

This file contains the definition of the properties that can be configured in the process. It is divided by navigation screens, and inside each screen tag are all the properties that can be modified.

It is possible, via code, to select the localization through the locale property. This parameter accepts a string with the desired language code (for example, "es" or "es_ES").

4.2.2. String folder

This folder contains a strings.xml file for each supported translation. The name must be formed as follows:

    strings.(language).xml

where (language) is the language code. For example, strings.es.xml would be the Spanish translation, strings.en.xml the English translation, strings.es_ES.xml Spanish from Spain and strings.es_AR.xml Spanish from Argentina.

The language can be forced, or the widget can select it based on the device configuration. When deciding which language to apply, the following sequence is performed:

  • Search by locale code (for example, "es_AR").
  • If there is no match, search for the generic language ("es").
  • If there is still no result, use the default language.
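
Independently of the SDK, the fallback sequence above can be sketched as a small helper that picks the best strings file for a requested locale (the helper itself is illustrative, not part of the widget API):

```objectivec
#import <Foundation/Foundation.h>

// Illustrative helper: given the strings files present in the bundle and a
// requested locale, apply the fallback sequence described above.
static NSString *BestStringsFile(NSArray<NSString *> *available, NSString *locale) {
    // 1. Search by full locale code (for example, "es_AR")
    NSString *exact = [NSString stringWithFormat:@"strings.%@.xml", locale];
    if ([available containsObject:exact]) return exact;

    // 2. Fall back to the generic language (for example, "es")
    NSString *language = [[locale componentsSeparatedByString:@"_"] firstObject];
    NSString *generic = [NSString stringWithFormat:@"strings.%@.xml", language];
    if ([available containsObject:generic]) return generic;

    // 3. Fall back to the default language
    return @"strings.xml";
}
```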

4.2.3. Resources folder

It contains the folders with all the resources that can be modified, divided by density. It is mandatory to generate the images for all densities, since the widget expects to find them in the folder corresponding to the device's density. New folders can also be created for additional densities.

4.2.4. BACKGROUND element

The background element is composed of 4 segments which can be given colour independently:

  • top: defines the background colour of the segment or upper panel.
  • middle_top: defines the background colour of the segment or panel where the image of the camera is located.
  • middle_bottom: defines the background colour of the segment or panel where the text is located.
  • bottom: defines the background colour of the segment or the lower panel.

Certain properties can also be configured that are used only on specific screens. Below we list them, indicating the screens where they are used:

  • pagination_separator (RegistrationTips, FaceMovementTips): defines the colour of the separation between the lower panel and the panel under the camera.
  • mirror_border_color (RegistrationTips, FaceMovementTips): defines the colour of the border of the circle that surrounds the camera image or the video of the registration tips. This element is also called the mirror.
  • mirror_border_width (RegistrationTips, FaceMovementTips): defines the width of the border of the circle that surrounds the camera image or the video of the registration tips. To hide the border, assign a value of 0.0 to this property.
  • mirror_mist_color (StartExtractor): defines the colour of the centre circle on the screen prior to the extraction. This colour must always include a transparency value, so the camera image shows through and the user can position themselves properly before the extraction starts. When a transparency value is included, the colour format is RGBA (the alpha value is indicated in the last byte).
  • mirror_color (Results): defines the background colour of the circle that shows the results of the registration process.

4.2.5. BUTTON element

  • background: defines the background colour of the button
  • decorator: defines the colour of the shadow of the button
  • foreground: defines the colour of the font of the button in case the content is a text
  • content_type: defines the type of the content of the button. There are 2 different types:
    • RESOURCE_ID: Content must contain the name of a file in the resources bundle
    • TEXT_ID: Content must contain the identifier of a literal in the translations file in the resources bundle
  • content: defines the content of the button, image or text
  • font: defines the type of font used if the content of the button is text
  • font_size: defines the size of the font if the content of the button is text

4.2.6. TEXT element

The text elements are used to define the graphical appearance of the texts on each widget screen. These are the properties that can be modified:

  • color: defines the text colour.
  • font: defines the type of the font used to show the text.
  • font_size: defines the size of the font.

On the registration results screen, the two texts that describe the registration quality have their colour forced to the colour of the bar that indicates the score.

4.2.7. IMAGE element

  • value: defines the name of the file that contains the image to show.

The image elements only have the property that defines the file where the physical image is located in the resources bundle. The images are obtained from the bundle by searching in the appropriate folder according to the density of the device.

4.2.8. VIDEO element

  • value: defines the name of the file that contains the video to show.

The video elements only have the property that defines the file where the physical video is located in the resources bundle.


5. Widget messages

The communication from the widget to the application, once the facial characteristics extraction is finished, is done through events. To indicate which class implements these methods (implements the protocol), specify it in the delegate parameter of the init method (in this example, self):

    _widget = [[SelphiWidget alloc] initWithFrontCameraIfAvailable:true
        resources:[bundle pathForResource:@"fphi-widget-resources-SelphiLive-1.2" ofType:@"zip"]
        delegate:self error:&error];

In this case, self indicates that the protocol is implemented in the same class from which the call is made.

5.1. Protocol events

The iOS Widget provided by FacePhi is responsible for performing the extraction of the facial features of the user, and thus will generate a facial template representative of the user.

5.1.1. ExtractionFinished event

It is executed when an extraction process ends.

    - (void)ExtractionFinished {
        // Elements available during the extraction
        FPhiWidgetExtractionData *results = _widget.results;
        FPBExtractionResult *result = results.result;

        // Template obtained during the extraction
        NSData *templateRaw = [result getTemplateRaw];

        // Best image of the process
        UIImage *bestImage = results.bestImage.image;

        // Best image of the process cropped at the face coordinates
        UIImage *bestImageCropped = results.bestImageCropped.image;
    }

The results object contains these fields:

  • templateRaw: It retrieves the generated raw template after the extraction process.
  • images: If the logImages flag is set, it retrieves the images obtained during the extraction process. The images are ordered from highest to lowest "facial score", so the best image of the extraction process is at position 0 of the array.
  • bestImage: Returns the best image extracted from the authentication process. This image is the original size image taken from the camera.
  • bestImageCropped: Returns a cropped image centered on the user's face. This image is obtained from the "bestImage". This is the image that should be used as a characteristic image of the user who carried out the process, for example as an 'avatar'.
  • livenessDiagnostic: It retrieves the final diagnostic of the liveness process.
  • qrData: It retrieves the data of QR codes captured.
  • iadBundle: It retrieves the encrypted data from the injection attack detection analysis.
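
Assuming the fields listed above are exposed as properties of the results object (their exact Objective-C types are assumptions), they could be read inside ExtractionFinished like this:

```objectivec
// Hedged sketch: property names follow the field list above; types are assumptions.
FPhiWidgetExtractionData *results = _widget.results;

NSArray *images = results.images;      // best facial score first, if logImages is set
NSString *qrData = results.qrData;     // captured QR contents, if qrMode was enabled
NSData *iadBundle = results.iadBundle; // encrypted injection-attack-detection data
```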

5.1.2. ExtractionFailed event

This event is executed when a problem occurs during the extraction.

    - (void)ExtractionFailed:(NSError *)error {
    }

5.1.3. ExtractionCancelled event

This event is executed when the user cancels the process manually by pressing on ‘Cancel’.

    - (void)ExtractionCancelled {
    }

5.1.4. ExtractionTimeout event

This event is executed when a maximum allowed time is reached without detecting a face.

    - (void)ExtractionTimeout {
    }

5.1.5. onEvent event

This event allows our widget to send information to the main application about important events that occur during the execution.

This method receives as parameters the time of the event (encoded as Unix time in milliseconds), the type of event and the information associated with this particular event. There are mainly 3 types of events:

  • Events about view changes or widget status changes.
  • User events such as button clicks or swipe movements.
  • Events about the process that is currently in progress. These events can be about errors such as not detecting a face, wrong movements, or not following the indications of the current process.

With these events we can communicate important data to analyze user behavior while using our technology.

    - (void)onEvent:(NSDate *)time type:(NSString *)type info:(NSString *)info {
    }