The LLM is trained to recognize the intent behind user input, even when phrased in diverse, non-technical, or casual ways.
Examples:
"Deploy a token called GoldCoin with 5M supply" → deploy_erc20
"What's my balance of GDC?" → get_token_balance
"Transfer 100 DIAI to my friend" → transfer_tokens
The model is able to:
Understand imperative commands ("Deploy this", "Send that")
Handle indirect requests ("Can I get my MYT balance?")
Interpret incomplete prompts and ask for clarification
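The routing behavior described above can be sketched in miniature. The keyword patterns and the clarification fallback below are illustrative assumptions standing in for the model's learned intent recognition, not its actual mechanism; only the three tool names come from the examples above.

```python
import re

# Tool names from the examples above; the regexes are an illustrative
# stand-in for the LLM's learned intent recognition.
INTENT_PATTERNS = {
    "deploy_erc20": re.compile(r"\b(deploy|create|launch)\b.*\btoken\b", re.I),
    "get_token_balance": re.compile(r"\bbalance\b", re.I),
    "transfer_tokens": re.compile(r"\b(transfer|send)\b", re.I),
}

def classify_intent(prompt: str) -> str:
    """Map a user prompt to a tool name, or fall back to clarification."""
    for tool, pattern in INTENT_PATTERNS.items():
        if pattern.search(prompt):
            return tool
    # Incomplete or ambiguous prompt: the agent asks a follow-up question
    # instead of guessing a tool.
    return "clarify"
```

For instance, `classify_intent("Transfer 100 DIAI to my friend")` resolves to `transfer_tokens`, while an input matching no pattern falls through to the clarification path.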