# Modules

We provide several pre-defined modules; each section below shows a module type together with its corresponding config.

### LLMFunctionModule

```python
module_type='LLMFunctionModule',
config=dict(
    type='LLMFunctionConfig',
    function_name="generate_example_sentence",
    function_description='${f"Propose an example sentence containing the given word in {language}"}',
    function_parameters=[
        dict(
            name="example_sentence",
            type="str",
            description="The example sentence containing the given word.",
        ),
        dict(
            name="translated_example_sentence",
            type="str",
            description="The translated example sentence containing the given word.",
        ),
    ],
    system_prompt='''${f"You are teaching a {language} class for {native_language} students. To help them learn how to use {language} words, you will make example sentences that demonstrate usage, then translate each sentence into {native_language} so students understand its meaning. Sentences should preferably be around 20 words."}''',
    user_prompt='''${f"{added_words[word_idx]}. Remember, try your best to help students understand how to use the word through this sentence, then translate the sentence into {native_language}"}''',
),
```
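The `function_name`, `function_description`, and `function_parameters` fields follow the general shape of an LLM function-calling (tool) schema. As a hedged illustration only (not the module's actual internals), here is how a config like the one above could be mapped onto an OpenAI-style JSON-schema tool definition; the `to_tool_schema` helper is hypothetical:

```python
# Illustrative sketch: converting an LLMFunctionConfig-style parameter list
# into a standard JSON-schema tool definition. This helper is hypothetical;
# only the input field names mirror the config above.
def to_tool_schema(name, description, parameters):
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": {
                "type": "object",
                "properties": {
                    p["name"]: {
                        # Map Python-style type names to JSON-schema names.
                        "type": {"str": "string", "int": "integer"}.get(p["type"], p["type"]),
                        "description": p["description"],
                    }
                    for p in parameters
                },
                "required": [p["name"] for p in parameters],
            },
        },
    }

schema = to_tool_schema(
    "generate_example_sentence",
    "Propose an example sentence containing the given word",
    [
        {"name": "example_sentence", "type": "str",
         "description": "The example sentence containing the given word."},
        {"name": "translated_example_sentence", "type": "str",
         "description": "The translated example sentence."},
    ],
)
```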

### MSTTSFunctionModule

```python
        "generate_tts_for_example_sentence_state": TaskState(
            name="generate_tts_for_example_sentence_state",
            next_state="whether_generate_word_speech",
            module_type='MSTTSFunctionModule',
            config=dict(
                name='sen_tts_audio_path',
                input='''${f"{materials_dict.get(words_list[word_idx], {}).get('example_sentence', example_sentence)}"}''',
                language="${f'{language.lower()}'}",
                save_path='${f"demos/duolingo/data/app/curriculum/{language}/audio/sent_{words_list[word_idx]}.mp3"}',
            ),
            outputs={
                "sen_tts_audio_path": ValueType(
                    type='str',
                    value="${sen_tts_audio_path}",
                ),
                "single_meterial": ValueType(
                    type="dict",
                    value="${{**single_meterial, 'sen_tts_audio_path': sen_tts_audio_path}}"
                ),
            },
        ),
```
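Throughout these configs, values wrapped in `${...}` appear to be Python expressions evaluated against the task's state variables (note the `${f"..."}` f-string form used for `input`, `language`, and `save_path` above). The following sketch shows one plausible way such a template could be resolved; this is an assumption about the syntax, not the engine's actual resolver:

```python
import re

# Hedged sketch: resolve a ${...} template by stripping the wrapper and
# evaluating the inner Python expression against a dict of state variables.
# This is an illustration of the apparent semantics, not the engine's code.
def resolve(template, variables):
    match = re.fullmatch(r"\$\{(.*)\}", template, re.DOTALL)
    if not match:
        return template  # plain literal, no substitution
    return eval(match.group(1), {}, variables)  # demo only; eval is unsafe on untrusted input

state = {"language": "Japanese", "words_list": ["neko"], "word_idx": 0}
path = resolve(
    '${f"demos/duolingo/data/app/curriculum/{language}/audio/sent_{words_list[word_idx]}.mp3"}',
    state,
)
```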

### ShuffleFunctionModule

```python
        "shuffle_words": TaskState(
            name="shuffle_words",
            module_type='ShuffleFunctionModule',
            config=dict(
                type='ShuffleFunctionConfig',
                elements='${[words[quiz_idx], wrong_words[0], wrong_words[1], wrong_words[2]]}',
            ),
        ),
```
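Conceptually, this step mixes the correct answer in with distractors so quiz choices appear in random order. A minimal plain-Python sketch of the same idea (the variable names mirror the config above; this is not the module's implementation):

```python
import random

# Sketch of what the shuffle step does conceptually: combine the correct
# word with three wrong options and randomize their order for a quiz.
correct_word = "neko"
wrong_words = ["inu", "tori", "uma"]

choices = [correct_word, *wrong_words]
random.shuffle(choices)  # in-place shuffle, analogous to the module's output
```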

### MSTTSPronounceAssessModule

```python
pronounce_state = TaskState(
    name="pronounce_state",
    module_type="MSTTSPronounceAssessModule",
    inputs={
        "text": Textbox(
            label="text",
        ),
        "audio_path": Textbox(
            label="audio_path",
        )
    },
    config=dict(
        name="assess_result",
        language="japanese",
        text="${text}",
        audio_path="${audio_path}",
    ),
    outputs={
        "result": ValueType(
            type="dict",
            value="${assess_result}"
        )
    }
)
```

### ProdiaImagenModule

```python
state = TaskState(
    name="prodia imagen",
    module_type='ProdiaImagenModule',
    inputs={
        "Prodia_Description": Textbox(),
        "Prodia_ModelName": ValueType(
            type='str',
            value='absolutereality'
        )
    },
    config=dict(
        type='ProdiaImagenConfig',
        prodia_model_name='${Prodia_ModelName}',
        description='${Prodia_Description}',
        enhanced_prompt="",
        negative_prompt="(Character not centered:1.3), bad hands, bad anatomy, extra hands, extra fingers, signature, artist name, upper body, (worst quality, low quality:1.4), (blush:1.2), (jpeg artifacts:1.4), bokeh, blurry, monochrome, dusty sunbeams, trembling, motion lines, motion blur, emphasis lines, text, title, logo, nude, nsfw",
    ),
    outputs={
        "image_path": Textbox(
            value="${image_path_0}"
        )
    },
    title="Prodia",
)
```

### JsonRWFunctionModule

```python
        "login_state": TaskState(
            name="login_state",
            module_type='JsonRWFunctionModule',
            config=dict(
                filename=f"{demo_root}/data/app/user_info.json",
                mode="read",
                var_type="dict",
                var_name="user_info",
            ),
            outputs={
                "user_info": ValueType(
                    type="dict",
                    value="${user_info}"
                )
            },
            next_state="check_user_info",
        ),
```
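The config reads a JSON file into a dict that becomes available under `var_name`. As a hedged illustration of the behavior (not the module's implementation), the read and write modes boil down to something like this in plain Python; the `json_rw` helper and the throwaway path are hypothetical:

```python
import json
import os
import tempfile

# Hedged sketch of a JSON read/write step. `filename` and `mode` mirror the
# config fields above; this illustrates the behavior, not the module's code.
def json_rw(filename, mode, data=None):
    if mode == "read":
        with open(filename, encoding="utf-8") as f:
            return json.load(f)
    if mode == "write":
        with open(filename, "w", encoding="utf-8") as f:
            json.dump(data, f, ensure_ascii=False, indent=2)
        return data
    raise ValueError(f"unsupported mode: {mode}")

# Usage with a throwaway path (the real config points at
# {demo_root}/data/app/user_info.json):
path = os.path.join(tempfile.mkdtemp(), "user_info.json")
json_rw(path, "write", {"name": "alice", "level": 3})
user_info = json_rw(path, "read")
```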

