Example: Daily image skill

We can take our skills to the next level by adding steps that call an LLM (Large Language Model) service, giving them much greater language understanding. In Ainhoa, we do this by inserting steps of type Call LLM.

In this example, we will display NASA's image of the day by obtaining the data through an HTTP call and then asking our LLM service to interpret the result and provide us with the image and associated information as a response.

Nasa skill

Create the Skill

The first thing we need to do is create the skill. We'll provide the following information in the creation window:

| Field | Description | Value |
| --- | --- | --- |
| Title | Skill name. Preferably in lowercase and without spaces. | nasa_picture_of_the_day |
| Description | Brief description of the skill. Must be provided in each of the supported languages. | Use this function when the user wants to know NASA's astronomical picture of the day. |
| Type | Type of skill: Search or Dialogue. | Dialogue |
| Languages | List of languages supported by the skill. | English and Spanish |
| Tags | List of tags related to the skill. | |
| Access Mode | Access mode of the skill: Public access or Members only. | Public access |
| Members | List of members who have access to the skill and their associated privileges. | |
info

You can find more information on how to create a skill at this link: How to Create a Skill.

Set intents

To enable our skill to identify the user's intent, we need to provide it with a list of example phrases that the user might use so that the skill can learn to recognize the intent.

To do this, we'll go to the Bootstrap Intents card in the sidebar and add some phrases. The more diverse the phrases we add, the better our bot will be able to identify the intent. We must provide phrases in all the languages we have enabled.

In our case, we have added the following intents:

Nasa skill bootstrap intents

info

You can find more information about the sidebar in the following link: Sidebar.

Define variables

Sometimes in our integrations with other systems, we'll need to provide additional information such as access keys, API keys, etc. This information can be specified using variables. In this example, we'll make a call to an API provided by NASA to obtain information about the image of the day, but we need to provide an API key.

In the sidebar, we have the Variables card which allows us to store all the variables we need in our skill by simply providing a name and its value.

Nasa skill variables

These variables can be used in our steps with the following syntax: {{variables.name}}.
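The substitution itself is handled by Ainhoa at runtime, but a minimal Python sketch can illustrate how the `{{variables.name}}` syntax resolves. The `render` function and the `DEMO_KEY` value are hypothetical, used only to demonstrate the syntax:

```python
import re

def render(template: str, variables: dict) -> str:
    """Replace each {{variables.<name>}} placeholder with its stored value.

    Illustrative only: the real resolution is performed by the platform.
    """
    return re.sub(
        r"\{\{variables\.(\w+)\}\}",
        lambda m: variables[m.group(1)],
        template,
    )

url = render(
    "https://api.nasa.gov/planetary/apod?api_key={{variables.apikey}}",
    {"apikey": "DEMO_KEY"},
)
print(url)  # https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY
```

The same placeholder can appear anywhere in a step's configuration, such as a URL or a header value.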

info

You can find more information about variables at the following link: Sidebar.

tip

You can obtain your API Key to use NASA's APIs and review the available APIs in the following link: NASA Open APIs.

Define the steps

Next, we'll define the necessary steps to obtain the information from the API and provide it as a response in the chat. In this example, we'll need to make an HTTP call to fetch the information in JSON format and then provide that response to our LLM service to process and generate a response that we can send to the user.

Therefore, we'll need the following steps:

  1. Send an event to inform that we are retrieving the information.
  2. Make the HTTP call to fetch the information.
  3. Call the LLM service to interpret the results and create a response.
info

All the information about creating the different steps can be found at the following link: Steps.

We will add these steps in the Conversation tab as follows:

1. Information Event

We create a step of type Event to inform the user that we are going to request the information they are asking for. We must provide the event text in all available languages.

Nasa skill step 1

tip

This step is optional; we could proceed directly to query the information, but it's good practice to keep the user informed about what is happening at each moment.

2. Make the request to the external service

Next, we make the call to the external service that will provide us with the information we need. To do this, we add a step of type Call API and specify that the request is of type GET. We provide the URL of the service and add our variable that we saved earlier, which contains the necessary API Key. In this case, we don't need to add any additional headers.

https://api.nasa.gov/planetary/apod?api_key={{variables.apikey}}

Lastly, we specify that the result of the request should be stored in an entity named response.

Nasa skill step 2
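To give an idea of what the `response` entity will hold after this step, here is a sketch in Python that parses a sample JSON shaped like the APOD endpoint's response (sample values, not a live call). The fields shown are the ones the LLM step in the next section relies on:

```python
import json

# Sample body shaped like NASA's APOD response; values are made up.
sample_body = """
{
  "title": "The Horsehead Nebula",
  "explanation": "A dark nebula in the constellation Orion...",
  "url": "https://apod.nasa.gov/apod/image/sample.jpg",
  "hdurl": "https://apod.nasa.gov/apod/image/sample_hd.jpg",
  "copyright": "Example Photographer"
}
"""

# Roughly what the `response` entity would contain after the Call API step.
response = json.loads(sample_body)
print(response["title"])  # The Horsehead Nebula
```

Note that `copyright` is not always present in the real APOD response, which is why the prompt in the next step asks the LLM to mention it only if it appears in the JSON.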

3. Calling the LLM service to generate the response

The NASA service returns a JSON response containing various fields with information about the image, such as the title, explanation, copyright, etc.

In our example, we will use an LLM service to process this JSON response and extract the information we want to show to the user. To do this, we need to add a step of type Call LLM.

In this step, we select the LLM service we want to use, which must be previously registered in Ainhoa. We also set the maximum number of tokens we want to allow in the response and the temperature.

The response from the previous step is available using the following syntax: {{@response:body.value}}. With this response, we need to generate a prompt to instruct our LLM service to interpret the JSON response and provide us with another response in text format that we can send to the user.

We will use the following prompt:

Use the following JSON to respond.
Mention the title using the title field.
Comment on the image description using the information from the explanation field.
Provide the image by using the following: ![Image Title](image URL).
Also mention the copyright if it appears in the JSON.
Provide the link to the high-definition photo using the hdurl field.

JSON:
{{@response:body.value}}

As you can see, we provide clear and simple instructions and also attach the JSON using the entity we saved in the previous step.
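Ainhoa performs this substitution automatically before sending the prompt, but the following Python sketch shows roughly what the final payload looks like once `{{@response:body.value}}` is replaced with the stored API response (the prompt is abbreviated and the JSON body is a made-up sample):

```python
# Abbreviated version of the prompt defined above.
prompt_template = (
    "Use the following JSON to respond.\n"
    "Mention the title using the title field.\n"
    "Provide the link to the high-definition photo using the hdurl field.\n"
    "\n"
    "JSON:\n"
    "{{@response:body.value}}"
)

# Sample JSON standing in for the body saved in the `response` entity.
response_body = '{"title": "The Horsehead Nebula", "hdurl": "https://apod.nasa.gov/apod/image/sample_hd.jpg"}'

# The platform substitutes the entity reference; this mimics that step.
prompt = prompt_template.replace("{{@response:body.value}}", response_body)
print(prompt)
```

This is what the LLM service actually receives: the instructions followed by the raw JSON to interpret.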

Finally, we save the response provided by the LLM service in a new entity named llmresponse.

Now let's add a new step to send the response from the LLM service to the user. We create a step of type Say and directly write the response we saved earlier as follows: {{@llmresponse.value}}.

Nasa skill steps 3 and 4

info

All the information about the "Call LLM" step is available at the following link: Call LLM.

Make sure you have saved your changes by clicking on the Save button.

Test the skill

Now, we can test our skill by using the Test skill tab.

info

You can find all the information about the Test skill tab in the following link: Test skill.

We can also test the skill by assigning it to one of our bots and using the Test bot tab or directly using our bot's chat.

Ready! We now have a new skill for our bots that shows us the image of the day provided by NASA. 🪐

Nasa skill test