The need to protect personal privacy and the reluctance to transfer internal information to third parties make it increasingly difficult to improve deep-learning accuracy on large datasets. ari uses federated learning and encrypts the learning model and its parameters on the blockchain. Each zone trains on its own data, which never needs to be transferred, ensuring privacy and data security and complying with the personal information protection laws of various countries. Only the learned model parameters are returned to the central system, and the shared AI model of each zone does not belong to any single unit or organization.
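The federated scheme described above can be sketched as follows. This is a minimal illustration of federated averaging, not ari's actual implementation: each client trains locally on private data, and only the resulting parameters (weighted by dataset size) are aggregated by the central system. The logistic-regression local step and all function names here are illustrative assumptions.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=5):
    """One zone's local training step: simple logistic-regression
    gradient descent on its private data, which never leaves the zone."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-data @ w))
        grad = data.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Central-system step: average each zone's returned parameters,
    weighted by that zone's dataset size (FedAvg-style aggregation)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))
```

Only the parameter vectors returned by `local_update` cross zone boundaries; in ari's design these would additionally be encrypted and recorded on the blockchain.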
Encrypting with blockchain differs from conventional certificate-based encryption (CA). The organization that signs the certificate holds the root certificate (Root CA) and thereby also controls the right to use the AI model; certificates are only valid for a limited time, and there is no open, guaranteed mechanism for storing the root certificate.
Graphics Processing Units (GPUs) can significantly accelerate the numeric calculations of AI model development and reduce inference time. In other words, GPUs can substantially speed up AI development.
In the scenario in Figure 1 (without GPU sharing), a single GPU needs a long time to finish a complex task while GPU2 sits idle. If GPU2 could help GPU1 and GPU3 with their tasks, the overall time would be reduced.
Let's look at another scenario in Figure 2: all the GPUs are working on the same computing task, but none of them is fully loaded. In this case, the GPUs have spare capacity to handle more tasks.
With GPU sharing and task scheduling, the throughput per unit time increases.
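One common way to realize this kind of scheduling is the longest-processing-time-first greedy heuristic: sort tasks by duration and always assign the next task to the currently least-loaded GPU. The sketch below is an illustrative assumption, not ari's actual scheduler; task durations are given as plain numbers.

```python
import heapq

def schedule(tasks, num_gpus):
    """Greedy makespan scheduling: process tasks longest-first, always
    placing the next task on the GPU with the least accumulated load."""
    heap = [(0.0, gpu) for gpu in range(num_gpus)]   # (load, gpu_id)
    assignment = {gpu: [] for gpu in range(num_gpus)}
    for duration in sorted(tasks, reverse=True):
        load, gpu = heapq.heappop(heap)              # least-loaded GPU
        assignment[gpu].append(duration)
        heapq.heappush(heap, (load + duration, gpu))
    makespan = max(load for load, _ in heap)         # time until all GPUs finish
    return assignment, makespan
```

For example, `schedule([4, 3, 3, 2], 2)` spreads the four tasks across two GPUs so that both finish at time 6, instead of one GPU doing all the work.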
When developing an AI model, you can use the GUI to drag and drop all the functions, algorithms, actions, or other processing steps you need, or even chain them into an automated pipeline.
Take a facial recognition system at a school, for example: parents receive a message when the system recognizes their child arriving at school, while security staff are alerted when a camera captures an unrecognized person.
The facial recognition process includes several steps: face detection (localization), feature extraction, and face identification and verification.
During face detection, the incoming RGB images are first processed with HOG (histogram of oriented gradients); an SVM (support vector machine) is then applied to locate the face.
Here the SVM model is pre-trained on LFW (Labeled Faces in the Wild) together with images that contain no faces. Note that PCA (principal component analysis) is used to extract eigenfaces before the images are fed into the SVM.
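The PCA-then-SVM stage can be sketched with scikit-learn. This is a minimal illustration under stated assumptions: random vectors stand in for the LFW face patches and non-face patches (in the real pipeline these would be HOG descriptors computed from the RGB image), and the component count and kernel are arbitrary choices, not ari's configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-ins for face vs. non-face training patches.
rng = np.random.default_rng(0)
faces = rng.normal(loc=1.0, size=(100, 64))       # "face" feature vectors
non_faces = rng.normal(loc=-1.0, size=(100, 64))  # "non-face" feature vectors
X = np.vstack([faces, non_faces])
y = np.array([1] * 100 + [0] * 100)

# PCA projects each patch onto its principal components (the eigenfaces)
# before the SVM decides face vs. non-face, as described above.
detector = make_pipeline(PCA(n_components=16), SVC(kernel="linear"))
detector.fit(X, y)
```

At detection time, the trained `detector` would be slid over candidate windows of the image, and windows classified as 1 mark where a face is located.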
With the pipeline, you can accomplish all the tasks mentioned above by simple drag-and-drop. That is to say, you can build and manage your AI development environment easily and efficiently. Moreover, you can select the best-performing algorithm by running different AI models in different container images.
CDSS is one of the target products of the ari AI development platform; it puts AI into practical SaaS applications. During AI model development, CDSS provides data analysis and statistics. If the data are statistically imbalanced, the trained model will be inaccurate, so performing data cleaning beforehand greatly affects the results. ari's CDSS integrates AI model training, data analysis, diagnosis recommendations, and treatment images: it evaluates and classifies the images to determine whether a disease finding is positive or negative, and estimates the probability of the disease.
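A common way to keep an imbalanced clinical dataset from skewing the model is to reweight the loss by class frequency. The sketch below illustrates that general technique with scikit-learn on synthetic data; it is an assumption for illustration, not CDSS's actual training procedure, and the cohort sizes and features are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic imbalanced cohort: 950 negative cases, 50 positive (disease) cases.
rng = np.random.default_rng(1)
X_neg = rng.normal(-1.0, size=(950, 5))
X_pos = rng.normal(1.0, size=(50, 5))
X = np.vstack([X_neg, X_pos])
y = np.array([0] * 950 + [1] * 50)

# class_weight="balanced" reweights the loss inversely to class frequency,
# so the rare positive class is not drowned out by the majority class.
clf = LogisticRegression(class_weight="balanced").fit(X, y)
proba = clf.predict_proba(X_pos)[:, 1]  # estimated disease probability per case
```

The predicted probabilities correspond to the kind of disease-probability estimate the CDSS returns alongside its positive/negative classification.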