Ready-To-Use AI Models | Zoho Creator Help

Ready-To-Use AI Models

Info
AI Models have undergone a major revamp and have been renamed AI Modeler, which lets you build, train, and publish models for use across your apps. If you created models prior to this revamp, click here to know more.

Note:
AI calls are consumed each time an AI model runs within your Creator application.
  1. For AI fields such as OCR, Object Detection, Keyword Extraction, or Sentiment Analysis, an AI call is triggered when the input is provided in the required source fields. This AI call is deducted from the AI calls limit available in your Creator plan.
  2. If the source field is modified, either before or after the output is generated, the model runs again using the updated input. This triggers an additional AI call, which is also counted against your AI calls limit.
You can monitor your remaining AI calls limit from the Billing section. However, the AI agent is exempt from these restrictions, and no AI usage limits apply to it.

Ready-to-use AI models let you use artificial intelligence in your Zoho Creator application without any prior machine learning skills. Many of the most commonly used AI models for business scenarios are already built, fed with data, trained, and made readily available for you to deploy in your applications. For any AI model, continuous retraining is key to preventing drift and maintaining precise results. This is also taken care of for you, so you need not retrain the models yourself.


Currently, four ready-to-use AI models are available: Keyword Extraction, Sentiment Analysis, Object Detection, and Optical Character Recognition (OCR). These models help you transform unstructured data into structured, machine-readable data.

Info
  1. As with any AI, the model outcomes may not always be accurate. We are continuously refining the models to improve accuracy.
  2. The model outcomes are dynamic: the same input can produce different outcomes at different times, based on how much the machine has learned. This is because we perform model-specific optimizations from time to time, and the model learns continuously to become better suited to its purpose.

Besides the available ready-to-use AI models, Creator also lets you build custom AI models to meet your unique needs. Learn more

Deploying Ready-To-Use AI models

Keyword Extraction 

The keyword extraction model extracts key elements, such as words and phrases, from unstructured text input. Typically, all nouns in a statement are treated as keywords.

Business use case: You can quickly find out how many of your customers are requesting a price reduction by interpreting the keywords extracted from their reviews.


Note:
  1. The keyword extraction AI model works only on the text data type.
  2. This model can process up to 64 KB of data.
  3. Currently, only English is supported.

To use a keyword extraction model in your application:


  1. Navigate to the Microservices tab and click the +Create New button at the top-right corner of the page.
  2. Hover over the AI Models card and click Create.
  3. Under Deploy - Ready to use Models, select Keyword Extraction.
  4. Choose the application and the form in which the keyword extraction model will be deployed. You will be redirected to the edit mode of the selected application.
  5. Select the required source field in the popup that appears. This is the field that holds the input text to which keyword extraction will be applied.
  6. Click the Add Field button. A keyword extraction field will be added to the selected application. When a user enters text in the selected source field, the extracted keywords will be auto-populated in this field.
Note:
  1. The keyword extraction field is disabled by default, and you cannot modify its value.
  2. Using Deluge, the values of a keyword extraction field can be fetched but not updated.
  3. An on-user-input action workflow cannot be configured for this field. Instead, you can configure one for the selected source field.
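
For example, the extracted keywords can be read through Deluge as described in the note above. The following is a minimal sketch for the price-reduction use case; the form name (Reviews), source field (Review_Text), and keyword extraction field (Extracted_Keywords) are hypothetical placeholders for your own names:

```deluge
// Hypothetical names: "Reviews" form, "Review_Text" source field,
// "Extracted_Keywords" keyword extraction field.
// Fetch records whose extracted keywords mention "price".
matching = Reviews[Extracted_Keywords.contains("price")];
for each rec in matching
{
    // The keyword extraction field is read-only via Deluge;
    // its value can be fetched but not updated.
    info rec.Review_Text;
}
```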

Sentiment Analysis

The sentiment analysis model identifies the attitude expressed in the input statement.
Business use case: When a customer's feedback says "This session is great," this model predicts the sentiment of the feedback as "Positive".
However, as mentioned previously, the machine learns continuously and the results may not always be accurate. For example, the model may predict the input "session was good" as "Neutral" but "session was great" as "Positive", which suggests it needs strongly emotive words to commit to a sentiment. Yet, contradictorily, it may predict "the sessions were good" as "Positive", perhaps because more than one session being good reads as a positive note. Many factors affect the model's outcome, so results should be interpreted with individual judgment.

Note:
  1. The sentiment analysis AI model works only on input of the text data type.
  2. The attitudes that this model can detect are negative, positive, and neutral.
  3. Currently, only English is supported.


To use a sentiment analysis model in your application:


  1. Navigate to the Microservices tab and click the +Create New button at the top-right corner of the page.
  2. Hover over the AI Models card and click Create.
  3. Under Deploy - Ready to use Models, select Sentiment Analysis.
  4. Choose the application and the form in which the sentiment analysis model will be deployed. You will be redirected to the edit mode of the selected application.
  5. Select the required source field in the popup that appears. This is the field that holds the input text to which sentiment analysis will be applied.
Note:
  1. Sentiment analysis can be applied to the single line and multi line text field types. Only the single line and multi line fields available in the form will be listed for source field selection.
  2. If none of the supported field types are available in the form, you will need to create one first in order to deploy the sentiment analysis model.
  3. Only fields present in the parent form where the model is deployed can be selected as the source field. A subform acts independently of its parent form, so subform fields can serve as the source field only when the model is placed within the subform.



  6. Click the Add Field button. A sentiment analysis field will be added to the selected application. When a user enters text in the selected source field, the detected sentiment will be auto-populated in this field.
Note:
  1. The sentiment analysis field is disabled by default, and you cannot modify its value.
  2. Using Deluge, the values of a sentiment analysis field can be fetched but not updated.
  3. An on-user-input action workflow cannot be configured for this field. Instead, you can configure one for the selected source field.
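
As a rough sketch of reading this field through Deluge, the snippet below counts negative feedback records. The form name (Feedback) and field name (Detected_Sentiment) are hypothetical, and the exact casing of the sentiment values should be verified against your deployed model's output:

```deluge
// Hypothetical names: "Feedback" form, "Detected_Sentiment" AI field.
// Fetch all records the model classified as negative and count them.
negative_records = Feedback[Detected_Sentiment == "Negative"];
info negative_records.count();
```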

Object Detection

The object detection model detects elements in the input image, classifies them into predefined categories, and returns the list of all detected elements. 


Business use case: In an order management application, uploading an image of an apple lets you identify the item and keep track of the number of items in stock.


Info
The ready-to-use AI model can be directly deployed in your applications. This is pre-trained with a specific detailed dataset. You can also customize your object detection model to suit your unique needs using the custom object detection model.

Note:
  1. The object detection AI model works only on the file data type.
  2. The supported file formats are .JPEG, .PNG, .BMP, and .TIF.
  3. The supported categories are: person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table, toilet, TV, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair drier, toothbrush, bee, butterfly, camera, glasses, guitar, lavender, penguin, strawberry, suit, and sunflower.


To use an object detection model in your application:


1. Navigate to the Microservices tab and click the +Create New button at the top-right corner of the page.

2. Hover over the AI Models card and click Create.
3. Under Deploy - Ready to use Models, select Object Detection (Pre-Built).

4. Choose the application and its form in which the object detection model will be deployed. You will be redirected to the edit mode of the selected application.

5. Select the required source field in the popup that appears. This is the field that holds the input image upon which the object detection will be applied. 

Note:
  1. Currently, only the image field type is supported as the source field. Therefore, only the image fields available in the form will be listed for source field selection.
  2. If there is no image field available in the form, you will first need to create one in order to deploy the object detection model.
  3. Only fields present in the parent form where the model is deployed can be selected as the source field. A subform acts independently of its parent form, so subform fields can serve as the source field only when the model is placed within the subform.

6. Click the Next button, then click Add Field. An object detection field will be added to the selected application. When a user uploads an image to the selected source field, the labels of the detected objects will be auto-populated in this field.


Note:
  1. This field is disabled by default, and you cannot modify its value.
  2. Using Deluge, the values of an object detection field can be fetched but not updated.
  3. An on-user-input action workflow cannot be configured for this field. Instead, you can configure one for the selected source field.
  4. If the uploaded input image contains objects that are not in the predefined set, the model may return the closest value from the supported categories.
  5. Objects are detected individually. When an object appears more than once in an image, the model returns the detected value once per instance rather than a plural form. For example, if an image contains two dogs, the model's result will be: dog, dog
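
Because each instance is listed separately, counting occurrences of a label gives the item count for the stock-keeping use case. A minimal Deluge sketch, assuming a "Stock_Entries" form with a "Detected_Objects" object detection field (both names hypothetical) and running inside a form workflow where `input` refers to the current record:

```deluge
// Hypothetical names: "Stock_Entries" form, "Detected_Objects" AI field.
// The AI field holds a comma-separated list such as "apple, apple, bowl".
entry = Stock_Entries[ID == input.ID];
apple_count = 0;
for each obj in entry.Detected_Objects.toList(",")
{
    if(obj.trim() == "apple")
    {
        apple_count = apple_count + 1;   // one increment per detected instance
    }
}
info apple_count;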

Optical Character Recognition (OCR)

The OCR model recognizes text in images and PDFs and converts it into digital form using image and text processing.
Business use case: Upon uploading a business card, you can quickly and automatically fetch address, phone numbers, and other information with the help of OCR.


Info
The ready-to-use AI model can be directly deployed in your applications. It extracts all detected text from the input. You can also extract a selective portion of an image or a PDF by using the custom OCR model.

Note:
  1. The OCR AI model works only on the file data type.
  2. This model can read up to 64 KB of text.
  3. The supported file formats for input images are .JPEG, .PNG, .BMP, and .TIF.
  4. For text extraction from PDFs, you must upload at least five PDFs of a similar layout, not exceeding 5 MB.
  5. This model can also detect handwriting, but the results may not be fully accurate. It is advised to upload images and PDFs with text in printed format.
  6. Currently, only English is supported.


To use an OCR model in your application:


1. Navigate to the Microservices tab and click the +Create New button at the top-right corner of the page.

2. Hover over the AI Models card and click Create.

3. Under Deploy - Ready to use Models, select OCR (Pre-Built).

4. Choose the application and its form in which the OCR will be deployed and click Use Model. You will be redirected to the edit mode of the selected application.

5. Select the required source field in the popup that appears. This is the field that holds the input image or PDF upon which the OCR will be applied. 
Note:
  1. Currently, the image and file upload field types are supported as the source field. Therefore, only the image and file upload fields available in the form will be listed for source field selection.
  2. If there is no image or file upload field available in the form, you will need to create one first in order to deploy the OCR model.
  3. Only fields present in the parent form where the model is deployed can be selected as the source field. A subform acts independently of its parent form, so subform fields can serve as the source field only when the model is placed within the subform.

6. Click the Next button, then click Add Field. An OCR field will be added to the selected application. When a user uploads an image or a PDF to the selected source field, the extracted text will be auto-populated in this field.


Note:
  1. This field is disabled by default, and you cannot modify its value.
  2. Using Deluge, the values of an OCR field can be fetched but not updated.
  3. An on-user-input action workflow cannot be configured for this field. Instead, you can configure one for the selected source field.
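
For the business-card use case, the extracted text can be read and inspected through Deluge. A minimal sketch, assuming a "Business_Cards" form with an "Extracted_Text" OCR field (both names hypothetical) inside a form workflow where `input` refers to the current record:

```deluge
// Hypothetical names: "Business_Cards" form, "Extracted_Text" OCR field.
card = Business_Cards[ID == input.ID];
text = card.Extracted_Text;          // read-only via Deluge
// A simple check for an email address in the recognized card text.
if(text.contains("@"))
{
    info "Card text appears to contain an email address";
}
```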