Web app

Features of the web app. New features are added regularly, so check back on this page as the list keeps growing.

Chat - Create a conversational experience on contained datasets.

With each chat you will see where the AI found the information: cards show the exact data that was retrieved.
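As a rough illustration, a chat reply and its retrieval cards could be modelled as a data structure like the one below. This is a minimal sketch only; the field names (document, snippet, score) are assumptions for illustration, not the web app's actual API.

```python
from dataclasses import dataclass

@dataclass
class SourceCard:
    """One retrieved chunk shown alongside an answer (hypothetical shape)."""
    document: str   # source document or URL the chunk came from
    snippet: str    # the exact text that was retrieved
    score: float    # retrieval similarity score

@dataclass
class ChatAnswer:
    """A chat reply plus the cards rendered next to it (hypothetical shape)."""
    text: str
    sources: list[SourceCard]
```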

Multi-model - Choose which model powers your chat (see the model list below).

Roles / Agents - Per dataset, you can configure your own custom roles or choose from the default and shared roles and prompt fragments.
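As a rough sketch of how a role and its prompt fragments could be combined into the system prompt for a dataset, consider the following. The build_system_prompt helper and the example texts are hypothetical and only illustrate the idea of prompt persona injection.

```python
def build_system_prompt(role: str, fragments: list[str]) -> str:
    """Compose the system prompt sent with each chat turn (hypothetical helper)."""
    parts = [role.strip()] + [f.strip() for f in fragments]
    return "\n\n".join(parts)

# Example: a custom role plus two shared prompt fragments
system_prompt = build_system_prompt(
    "You are a legal analyst. Answer using only the provided context.",
    [
        "Cite the source document for every claim.",
        "If the context does not contain the answer, say so.",
    ],
)
```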

  • Search on a Dataset: Users can search for specific information within a dataset, making it easier to find relevant details and insights. The search algorithm is the same as that used for context retrieval, ensuring accurate and reliable results.

  • Share a Chat Conversation: Users can share a chat conversation with others, enabling them to collaborate on ideas, ask for input, or simply share interesting discussions.

  • Private Chat History: Each user's chat history is kept separate and confidential, allowing them to maintain privacy and security in their conversations.

  • Individual Accounts: These are separate accounts for each user of the system, allowing them to have their own private chat histories, datasets, and preferences.

  • Private Datasets: Users can create and maintain their own private datasets, which are not visible or accessible to other users. These datasets can be used for personalized chat experiences and machine learning models.

  • Instance Wide Shared Datasets: In addition to private datasets, there may also be datasets that are shared across the entire instance of the system. These datasets can be accessed and used by all users, enabling collaboration and knowledge sharing.

  • Roles (AKA Custom Prompt Persona Injection) Per Dataset: Users can assign different roles or personas to themselves when chatting with a particular dataset. This allows for greater flexibility and customization in how they interact with the data.

  • Chat with Context Retriever: Users can engage in a conversation with a context retriever, which can provide relevant information and insights based on the contents of a particular dataset (see the retrieval sketch after this list).

  • Ingestion Pipeline With:

    • Web Crawler: The system includes a web crawler that can automatically scrape and import data from websites, making it easy to incorporate publicly available information into a dataset.

    • File Import: Users can manually import a variety of file types into the system, enabling them to work with structured data such as spreadsheets, as well as unstructured data such as documents and emails. The maximum file upload size can be configured to ensure optimal performance.

    • Async Processing: To prevent resource saturation and ensure smooth operation of the system, data imports and other resource-intensive tasks are performed asynchronously. Flow control mechanisms prevent queue backups and maintain system stability.

    • Spatial Positioning: The system includes spatial positioning capabilities, enabling users to work with geographic and spatial data.

    • Langchain Loaders and Custom Pipelines Compatibility: The system is compatible with Langchain loaders and custom pipelines, enabling users to incorporate specialized machine learning models and data processing workflows into their work (see the loader example after this list).

  • Computation Times and Tokens Usage Logs: To help users monitor system performance and resource utilization, the system provides detailed logs of computation times and token usage. This information can be used to optimize machine learning models and fine-tune system settings for improved efficiency and effectiveness.

  • Chat experience enhanced with the possibility to choose between different models:

    • Mixtral 8x7B

    • Mixtral 8x22B

    • WizardLM-2-8x22B

    • DBRX (Databricks)

    • Qwen2-72B

  • MoA added to the chat experience. When you chat with the context documents, you now have the option to choose MoA, which stands for Model of Agents. This is a multi-model agent that creates three experts, each using a different model, and assembles their answers into the final answer to your question. This ensures better quality and diversity of responses; it also takes a bit longer since it performs more computation (see the MoA sketch below).
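The sketch below illustrates the retrieval idea behind dataset search and the context retriever: rank the dataset's chunks by relevance to the query, then build the prompt from the top matches. The Chunk type and the word-overlap score are toy stand-ins; the actual app uses embedding-based vector search.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    source: str

def score(query: str, chunk: Chunk) -> float:
    """Toy relevance score (word overlap); the real app uses vector similarity."""
    q = set(query.lower().split())
    return len(q & set(chunk.text.lower().split())) / (len(q) or 1)

def retrieve(query: str, chunks: list[Chunk], k: int = 4) -> list[Chunk]:
    """Return the k most relevant chunks; the same routine backs dataset search."""
    return sorted(chunks, key=lambda ch: score(query, ch), reverse=True)[:k]

def build_prompt(query: str, chunks: list[Chunk]) -> str:
    """Assemble the prompt the chat model would receive (model call omitted)."""
    context = "\n\n".join(f"[{c.source}] {c.text}" for c in retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```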
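As an example of the LangChain-compatible side of the ingestion pipeline, the snippet below loads a URL or a local file with standard LangChain loaders and splits it into chunks for indexing. Module paths depend on your LangChain version, and the app's own crawler, queueing, and flow-control logic are not shown here.

```python
# Requires: pip install langchain-community langchain-text-splitters beautifulsoup4
from langchain_community.document_loaders import TextLoader, WebBaseLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter

def load_and_chunk(source: str):
    """Load a URL or local file and split it into chunks ready for indexing."""
    loader = WebBaseLoader(source) if source.startswith("http") else TextLoader(source)
    docs = loader.load()
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    return splitter.split_documents(docs)

# In the real pipeline this step would be enqueued and run asynchronously
# so that large imports do not saturate the instance.
chunks = load_and_chunk("https://example.com/docs")
```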
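A minimal sketch of the MoA flow described above: three expert models answer the question independently, and a final model merges their drafts into one reply. The model identifiers and the chat helper are placeholders for whatever inference backend the instance is configured with.

```python
EXPERT_MODELS = ["mixtral-8x7b", "mixtral-8x22b", "qwen2-72b"]  # illustrative choices
AGGREGATOR_MODEL = "wizardlm-2-8x22b"

def chat(model: str, system: str, user: str) -> str:
    """Placeholder for a call to the configured inference backend."""
    raise NotImplementedError

def moa_answer(question: str, context: str) -> str:
    """Ask three expert models independently, then have one model merge their drafts."""
    system = f"Answer using only this context:\n{context}"
    drafts = [chat(m, system, question) for m in EXPERT_MODELS]
    merged = "\n\n".join(f"Expert {i + 1}: {d}" for i, d in enumerate(drafts))
    return chat(
        AGGREGATOR_MODEL,
        "Combine the expert answers below into one accurate, well-supported reply.",
        f"{merged}\n\nQuestion: {question}",
    )
```

The extra latency mentioned above comes from making four model calls (three experts plus the aggregation step) instead of one.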

Copyright © 2024 CitizenLab SL