diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md new file mode 100644 index 00000000..d804e579 --- /dev/null +++ b/.github/copilot-instructions.md @@ -0,0 +1,87 @@ +# Copilot instructions + +This document provides instructions to the Copilot AI assistants for the dvorka/mindforger project. + +## General instructions + +- Always write beautiful, readable, and maintainable code. +- Handle errors, exceptions, and corner cases. +- Prefer clarity over cleverness. Optimize only when needed and measured. +- Always KISS - keep changes small and focused. +- Always DRY the code - do not duplicate code; create reusable classes, functions and methods; do not repeat yourself. +- Always add tests alongside code changes. + +## Functional architecture instructions + +- Contribute to this repository, which is a thinking notebook and Markdown IDE desktop application. + +## Technology stack instructions + +- The project is written in C++. +- The code is portable so that it can be compiled on Linux, Windows and macOS. +- The application is written using the Qt framework. +- Always use C++11 and avoid newer language features. +- Always start code comments with a lowercase letter. +- Always use `MF_DEBUG` to write debugging output. +- Always use `_WIN32` to identify Windows-specific code. +- Always use `__APPLE__` to identify macOS-specific code. + +## Code quality instructions + +- Use the code formatting style used in mindforger.cpp - comments, indentation, parentheses, namespaces, directives, ... + +## Repository conventions + +- The project is structured as a library which is then used by the Qt application. +- Library dependencies are stored in `deps/` - each has its own build style. +- Library code lives under `lib/`. +- Qt application code lives under `app/`. +- Test code lives under `lib/tests/`. +- Licenses are stored in `licenses/`. + +## Test instructions + +- Always use the `gtest` (Google Test) framework to write tests. +- Each test (function) is structured into 3 sections: `// GIVEN`, `// WHEN`, and `// THEN`. The `GIVEN` section prepares the data, the `WHEN` section calls the function under test, and the `THEN` section prints, asserts and checks the results. +- The `THEN` section of the test must have at least one assert statement. +- Keep tests deterministic. +- Always use text to indicate success/failure/progress like DONE, ERROR or WIP - never use (unicode) characters like ✓ or ✗. +- Print or log intermediate values only when they aid debugging. +- Always make sure that tests which cover a new feature or fix are green. + +## Build instructions + +- Qt is used to describe the project structure using `*.pro` files and to build it using `qmake`. +- The Makefile to build, test, run and package the project is located in `build/Makefile`. + - Always use `make help` to find out what the targets are. + +## Documentation instructions + +- Markdown user documentation sources live under `doc/`. +- Doxygen documentation can be built using a target defined in `build/Makefile` - use details from that target. +- Always use `build/Makefile` targets to build the Doxygen documentation. + +## Continuous Integration instructions + +- GitHub Actions is used as CI for Linux and macOS. +- AppVeyor is used as CI for Windows. +- GitHub Actions CI configuration is stored under `.github/workflows/`. + +## Security and secrets instructions + +- Always use environment variables and secret stores. +- Always use GitHub Actions secrets. +- Never commit secrets, credentials or sensitive data. +- Validate, sanitize and anonymize all external inputs. +- Always run security-focused checks. +- Always add the new license to `licenses/` when you add a new direct dependency. + +## Release versioning instructions + +- Always use semantic versioning: MAJOR.MINOR.PATCH. +- Note that releases have a Git tag like `vMAJOR.MINOR.PATCH`. +- Note that releases are developed in `dev-MAJOR.MINOR.PATCH` branches.
- Note that Git branches follow a naming convention: fixes (`bug-NUMBER/DESCRIPTION`), features and enhancements (`feat-NUMBER/DESCRIPTION`), and documentation (`doc-NUMBER/DESCRIPTION`). +- Note that Conventional Commits (conventionalcommits.org) are used for the commit messages. +- Always update the change log stored in `Changelog` whenever you do a fix, change, or enhancement. +- Always make sure that the version is consistent in `app_info.h`, `Makefile`, `debian/debian-make-deb.sh`, `debian/changelog`, `macos/env.h`, `snap/snapcraft.yaml`, `tarball/tarball-build.sh` and `ubuntu/debian/changelog` - `app_info.h.py` is the one and only authoritative version source. diff --git a/app/app.pro b/app/app.pro index 2bcd26da..4a9eac14 100644 --- a/app/app.pro +++ b/app/app.pro @@ -286,6 +286,9 @@ HEADERS += \ src/qt/dialogs/rm_library_dialog.h \ src/qt/dialogs/run_tool_dialog.h \ src/qt/dialogs/wingman_dialog.h \ + src/qt/dialogs/add_llm_provider_dialog.h \ + src/qt/dialogs/openai_config_dialog.h \ + src/qt/dialogs/ollama_config_dialog.h \ src/qt/dialogs/sync_library_dialog.h \ src/qt/dialogs/terminal_dialog.h \ src/qt/kanban_column_model.h \ @@ -407,6 +410,9 @@ SOURCES += \ src/qt/dialogs/rm_library_dialog.cpp \ src/qt/dialogs/run_tool_dialog.cpp \ src/qt/dialogs/wingman_dialog.cpp \ + src/qt/dialogs/add_llm_provider_dialog.cpp \ + src/qt/dialogs/openai_config_dialog.cpp \ + src/qt/dialogs/ollama_config_dialog.cpp \ src/qt/dialogs/sync_library_dialog.cpp \ src/qt/dialogs/terminal_dialog.cpp \ src/qt/kanban_column_model.cpp \ diff --git a/app/src/qt/dialogs/add_llm_provider_dialog.cpp b/app/src/qt/dialogs/add_llm_provider_dialog.cpp new file mode 100644 index 00000000..57e8d43f --- /dev/null +++ b/app/src/qt/dialogs/add_llm_provider_dialog.cpp @@ -0,0 +1,83 @@ +/* + add_llm_provider_dialog.cpp MindForger thinking notebook + + Copyright (C) 2016-2026 Martin Dvorak + + This program is free software; you can redistribute it and/or + modify it under the terms of the GNU
General Public License + as published by the Free Software Foundation; either version 2 + of the License, or (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program. If not, see . +*/ +#include "add_llm_provider_dialog.h" + +namespace m8r { + +using namespace std; + +AddLlmProviderDialog::AddLlmProviderDialog(QWidget* parent) + : QDialog(parent), + selectedProviderType(WINGMAN_PROVIDER_NONE) +{ + questionLabel = new QLabel(tr("Which provider do you want to configure?"), this); + + providerTypeCombo = new QComboBox(this); + providerTypeCombo->addItem(tr("OpenAI"), WINGMAN_PROVIDER_OPENAI); + providerTypeCombo->addItem(tr("ollama"), WINGMAN_PROVIDER_OLLAMA); + + nextButton = new QPushButton(tr("Next >"), this); + nextButton->setDefault(true); + + cancelButton = new QPushButton(tr("Cancel"), this); + + // layout + QVBoxLayout* mainLayout = new QVBoxLayout(this); + mainLayout->addWidget(questionLabel); + mainLayout->addWidget(providerTypeCombo); + + QHBoxLayout* buttonLayout = new QHBoxLayout(); + buttonLayout->addStretch(); + buttonLayout->addWidget(cancelButton); + buttonLayout->addWidget(nextButton); + + mainLayout->addLayout(buttonLayout); + setLayout(mainLayout); + + // signals + QObject::connect(nextButton, &QPushButton::clicked, this, &AddLlmProviderDialog::handleNext); + QObject::connect(cancelButton, &QPushButton::clicked, this, &QDialog::reject); + + // dialog + setWindowTitle(tr("New LLM Provider")); + resize(fontMetrics().averageCharWidth()*55, height()); + setModal(true); +} + +AddLlmProviderDialog::~AddLlmProviderDialog() +{ +} + +void AddLlmProviderDialog::show() +{ + providerTypeCombo->setCurrentIndex(0); + selectedProviderType = 
WINGMAN_PROVIDER_NONE; + + QDialog::show(); +} + +void AddLlmProviderDialog::handleNext() +{ + selectedProviderType = static_cast<WingmanLlmProviders>( + providerTypeCombo->itemData(providerTypeCombo->currentIndex()).toInt()); + + accept(); +} + +} diff --git a/app/src/qt/dialogs/add_llm_provider_dialog.h b/app/src/qt/dialogs/add_llm_provider_dialog.h new file mode 100644 index 00000000..4dcd1378 --- /dev/null +++ b/app/src/qt/dialogs/add_llm_provider_dialog.h @@ -0,0 +1,57 @@ +/* + add_llm_provider_dialog.h MindForger thinking notebook + + Copyright (C) 2016-2026 Martin Dvorak + + This program is free software; you can redistribute it and/or + modify it under the terms of the GNU General Public License + as published by the Free Software Foundation; either version 2 + of the License, or (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program. If not, see <http://www.gnu.org/licenses/>.
+*/ +#ifndef M8RUI_ADD_LLM_PROVIDER_DIALOG_H +#define M8RUI_ADD_LLM_PROVIDER_DIALOG_H + +#include <QtWidgets> + +#include "../../lib/src/config/configuration.h" + +namespace m8r { + +class AddLlmProviderDialog : public QDialog +{ + Q_OBJECT + +private: + QLabel* questionLabel; + QComboBox* providerTypeCombo; + QPushButton* nextButton; + QPushButton* cancelButton; + + WingmanLlmProviders selectedProviderType; + +public: + explicit AddLlmProviderDialog(QWidget* parent); + AddLlmProviderDialog(const AddLlmProviderDialog&) = delete; + AddLlmProviderDialog(const AddLlmProviderDialog&&) = delete; + AddLlmProviderDialog& operator=(const AddLlmProviderDialog&) = delete; + AddLlmProviderDialog& operator=(const AddLlmProviderDialog&&) = delete; + ~AddLlmProviderDialog(); + + WingmanLlmProviders getSelectedProviderType() const { return selectedProviderType; } + + void show(); + +private slots: + void handleNext(); +}; + +} +#endif // M8RUI_ADD_LLM_PROVIDER_DIALOG_H diff --git a/app/src/qt/dialogs/configuration_dialog.cpp b/app/src/qt/dialogs/configuration_dialog.cpp index ec4be874..0c439c71 100644 --- a/app/src/qt/dialogs/configuration_dialog.cpp +++ b/app/src/qt/dialogs/configuration_dialog.cpp @@ -17,6 +17,9 @@ along with this program. If not, see <http://www.gnu.org/licenses/>.
*/ #include "configuration_dialog.h" +#include "add_llm_provider_dialog.h" +#include "openai_config_dialog.h" +#include "ollama_config_dialog.h" namespace m8r { @@ -34,9 +37,11 @@ ConfigurationDialog::ConfigurationDialog(QWidget* parent) navigatorTab = new NavigatorTab{this}; mindTab = new MindTab{this}; wingmanTab = new WingmanTab{this}; + wingman2Tab = new Wingman2Tab{this}; tabWidget->addTab(appTab, tr("Application")); tabWidget->addTab(wingmanTab, tr("Wingman")); + tabWidget->addTab(wingman2Tab, tr("Wingman2")); tabWidget->addTab(viewerTab, tr("Viewer")); tabWidget->addTab(editorTab, tr("Editor")); tabWidget->addTab(markdownTab, tr("Markdown")); @@ -78,6 +83,7 @@ void ConfigurationDialog::show() navigatorTab->refresh(); mindTab->refresh(); wingmanTab->refresh(); + wingman2Tab->refresh(); QDialog::show(); } @@ -91,6 +97,7 @@ void ConfigurationDialog::saveSlot() navigatorTab->save(); mindTab->save(); wingmanTab->save(); + wingman2Tab->save(); // callback: notify components on config change using signals defined in // the main window presenter @@ -1121,4 +1128,274 @@ void ConfigurationDialog::NavigatorTab::save() config.setNavigatorMaxNodes(maxNodesSpin->value()); } +/* + * Wingman2 tab + */ + +ConfigurationDialog::Wingman2Tab::Wingman2Tab(QWidget* parent) + : QWidget(parent), + config(Configuration::getInstance()) +{ + helpLabel = new QLabel( + tr("Configure Large Language Model (LLM) to be used by Wingman"), this); + + QLabel* providerLabel = new QLabel(tr("LLM Provider:"), this); + llmProvidersCombo = new QComboBox(this); + + addProviderButton = new QPushButton(tr("Add Provider"), this); + + QHBoxLayout* providerLayout = new QHBoxLayout(); + providerLayout->addWidget(providerLabel); + providerLayout->addWidget(llmProvidersCombo, 1); + providerLayout->addWidget(addProviderButton); + + // provider details group + providerDetailsGroup = new QGroupBox(tr("Selected Provider Details"), this); + + QLabel* typeLabel = new QLabel(tr("Provider Type:"), this); + 
providerTypeValue = new QLabel("", this); + + QLabel* mLabel = new QLabel(tr("Model:"), this); + modelValue = new QLabel("", this); + + QLabel* sLabel = new QLabel(tr("Status:"), this); + statusValue = new QLabel("", this); + + editButton = new QPushButton(tr("Edit"), this); + removeButton = new QPushButton(tr("Remove"), this); + testButton = new QPushButton(tr("Test Connection"), this); + + QGridLayout* detailsLayout = new QGridLayout(); + detailsLayout->addWidget(typeLabel, 0, 0); + detailsLayout->addWidget(providerTypeValue, 0, 1); + detailsLayout->addWidget(mLabel, 1, 0); + detailsLayout->addWidget(modelValue, 1, 1); + detailsLayout->addWidget(sLabel, 2, 0); + detailsLayout->addWidget(statusValue, 2, 1); + + QHBoxLayout* buttonLayout = new QHBoxLayout(); + buttonLayout->addWidget(editButton); + buttonLayout->addWidget(removeButton); + buttonLayout->addWidget(testButton); + buttonLayout->addStretch(); + + QVBoxLayout* groupLayout = new QVBoxLayout(); + groupLayout->addLayout(detailsLayout); + groupLayout->addLayout(buttonLayout); + + providerDetailsGroup->setLayout(groupLayout); + providerDetailsGroup->setVisible(false); + + // main layout + QVBoxLayout* mainLayout = new QVBoxLayout(this); + mainLayout->addWidget(helpLabel); + mainLayout->addSpacing(10); + mainLayout->addLayout(providerLayout); + mainLayout->addWidget(providerDetailsGroup); + mainLayout->addStretch(); + + setLayout(mainLayout); + + // signals + QObject::connect( + addProviderButton, &QPushButton::clicked, + this, &Wingman2Tab::handleAddProvider); + QObject::connect( + editButton, &QPushButton::clicked, + this, &Wingman2Tab::handleEditProvider); + QObject::connect( + removeButton, &QPushButton::clicked, + this, &Wingman2Tab::handleRemoveProvider); + QObject::connect( + testButton, &QPushButton::clicked, + this, &Wingman2Tab::handleTestConnection); + QObject::connect( + llmProvidersCombo, SIGNAL(currentIndexChanged(int)), + this, SLOT(handleProviderSelectionChanged(int))); +} + 
+ConfigurationDialog::Wingman2Tab::~Wingman2Tab() +{ +} + +void ConfigurationDialog::Wingman2Tab::refresh() +{ + // populate providers combo + llmProvidersCombo->clear(); + + vector<LlmProviderConfig>& providers = config.getLlmProviders(); + if (providers.empty()) { + providerDetailsGroup->setVisible(false); + return; + } + + for (const auto& provider : providers) { + llmProvidersCombo->addItem( + QString::fromStdString(provider.displayName), + QString::fromStdString(provider.id)); + } + + // select active provider + LlmProviderConfig* activeProvider = config.getActiveLlmProvider(); + if (activeProvider) { + int index = llmProvidersCombo->findData( + QString::fromStdString(activeProvider->id)); + if (index >= 0) { + llmProvidersCombo->setCurrentIndex(index); + } + } + + handleProviderSelectionChanged(llmProvidersCombo->currentIndex()); +} + +void ConfigurationDialog::Wingman2Tab::save() +{ + // save active provider selection + if (llmProvidersCombo->count() > 0) { + QString providerId = llmProvidersCombo->itemData( + llmProvidersCombo->currentIndex()).toString(); + config.setActiveLlmProvider(providerId.toStdString()); + } +} + +void ConfigurationDialog::Wingman2Tab::handleAddProvider() +{ + AddLlmProviderDialog addDialog(this); + if (addDialog.exec() == QDialog::Accepted) { + WingmanLlmProviders providerType = addDialog.getSelectedProviderType(); + + if (providerType == WINGMAN_PROVIDER_OPENAI) { + OpenAiConfigDialog configDialog(this); + if (configDialog.exec() == QDialog::Accepted) { + config.addLlmProvider(configDialog.getProviderConfig()); + refresh(); + } + } else if (providerType == WINGMAN_PROVIDER_OLLAMA) { + OllamaConfigDialog configDialog(this); + if (configDialog.exec() == QDialog::Accepted) { + config.addLlmProvider(configDialog.getProviderConfig()); + refresh(); + } + } + } +} + +void ConfigurationDialog::Wingman2Tab::handleEditProvider() +{ + if (llmProvidersCombo->count() == 0) { + return; + } + + QString providerId = llmProvidersCombo->itemData(
llmProvidersCombo->currentIndex()).toString(); + LlmProviderConfig* provider = config.getLlmProviderById(providerId.toStdString()); + + if (!provider) { + return; + } + + // TODO: implement edit functionality + QMessageBox::information( + this, + tr("Edit Provider"), + tr("Edit functionality is not yet implemented.")); +} + +void ConfigurationDialog::Wingman2Tab::handleRemoveProvider() +{ + if (llmProvidersCombo->count() == 0) { + return; + } + + QString providerId = llmProvidersCombo->itemData( + llmProvidersCombo->currentIndex()).toString(); + + QMessageBox::StandardButton reply = QMessageBox::question( + this, + tr("Remove Provider"), + tr("Are you sure you want to remove this LLM provider configuration?"), + QMessageBox::Yes | QMessageBox::No); + + if (reply == QMessageBox::Yes) { + config.removeLlmProvider(providerId.toStdString()); + refresh(); + } +} + +void ConfigurationDialog::Wingman2Tab::handleTestConnection() +{ + if (llmProvidersCombo->count() == 0) { + return; + } + + QString providerId = llmProvidersCombo->itemData( + llmProvidersCombo->currentIndex()).toString(); + LlmProviderConfig* provider = config.getLlmProviderById(providerId.toStdString()); + + if (!provider) { + return; + } + + string errorMessage; + bool success = false; + + if (provider->providerType == WINGMAN_PROVIDER_OPENAI) { + success = config.probeOpenAiProvider( + provider->apiKey, provider->llmModel, errorMessage); + } else if (provider->providerType == WINGMAN_PROVIDER_OLLAMA) { + success = config.probeOllamaProvider( + provider->url, provider->llmModel, errorMessage); + } + + if (success) { + QMessageBox::information( + this, + tr("Connection Test"), + tr("Provider configuration is valid.")); + } else { + QMessageBox::critical( + this, + tr("Connection Test"), + tr("Provider configuration test failed: %1") + .arg(QString::fromStdString(errorMessage))); + } +} + +void ConfigurationDialog::Wingman2Tab::handleProviderSelectionChanged(int index) +{ + if (index < 0 || 
llmProvidersCombo->count() == 0) { + providerDetailsGroup->setVisible(false); + return; + } + + QString providerId = llmProvidersCombo->itemData(index).toString(); + LlmProviderConfig* provider = config.getLlmProviderById(providerId.toStdString()); + + if (!provider) { + providerDetailsGroup->setVisible(false); + return; + } + + // update details + if (provider->providerType == WINGMAN_PROVIDER_OPENAI) { + providerTypeValue->setText("OpenAI"); + } else if (provider->providerType == WINGMAN_PROVIDER_OLLAMA) { + providerTypeValue->setText("ollama"); + } else { + providerTypeValue->setText("Unknown"); + } + + modelValue->setText(QString::fromStdString(provider->llmModel)); + + if (provider->isValid) { + statusValue->setText(tr("Configured ✓")); + statusValue->setStyleSheet("QLabel { color: green; }"); + } else { + statusValue->setText(tr("Not validated")); + statusValue->setStyleSheet("QLabel { color: orange; }"); + } + + providerDetailsGroup->setVisible(true); +} + } // m8r namespace diff --git a/app/src/qt/dialogs/configuration_dialog.h b/app/src/qt/dialogs/configuration_dialog.h index 95b0eb96..6dbb63c9 100644 --- a/app/src/qt/dialogs/configuration_dialog.h +++ b/app/src/qt/dialogs/configuration_dialog.h @@ -41,6 +41,7 @@ class ConfigurationDialog : public QDialog class WingmanTab; class WingmanOpenAiTab; class WingmanOllamaTab; + class Wingman2Tab; private: QTabWidget* tabWidget; @@ -51,6 +52,7 @@ class ConfigurationDialog : public QDialog NavigatorTab* navigatorTab; MindTab* mindTab; WingmanTab* wingmanTab; + Wingman2Tab* wingman2Tab; QDialogButtonBox *buttonBox; @@ -357,5 +359,45 @@ class ConfigurationDialog::MarkdownTab : public QWidget void save(); }; +/** + * @brief Wingman2 tab for managing LLM providers. 
+ */ +class ConfigurationDialog::Wingman2Tab : public QWidget +{ + Q_OBJECT + +private: + Configuration& config; + + QLabel* helpLabel; + QComboBox* llmProvidersCombo; + QPushButton* addProviderButton; + + QGroupBox* providerDetailsGroup; + QLabel* providerTypeLabel; + QLabel* providerTypeValue; + QLabel* modelLabel; + QLabel* modelValue; + QLabel* statusLabel; + QLabel* statusValue; + QPushButton* editButton; + QPushButton* removeButton; + QPushButton* testButton; + +public: + explicit Wingman2Tab(QWidget* parent); + ~Wingman2Tab(); + + void refresh(); + void save(); + +private slots: + void handleAddProvider(); + void handleEditProvider(); + void handleRemoveProvider(); + void handleTestConnection(); + void handleProviderSelectionChanged(int index); +}; + } #endif // M8RUI_CONFIGURATION_DIALOG_H diff --git a/app/src/qt/dialogs/ollama_config_dialog.cpp b/app/src/qt/dialogs/ollama_config_dialog.cpp new file mode 100644 index 00000000..09ef563f --- /dev/null +++ b/app/src/qt/dialogs/ollama_config_dialog.cpp @@ -0,0 +1,224 @@ +/* + ollama_config_dialog.cpp MindForger thinking notebook + + Copyright (C) 2016-2026 Martin Dvorak + + This program is free software; you can redistribute it and/or + modify it under the terms of the GNU General Public License + as published by the Free Software Foundation; either version 2 + of the License, or (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program. If not, see . 
+*/ +#include "ollama_config_dialog.h" + +namespace m8r { + +using namespace std; + +OllamaConfigDialog::OllamaConfigDialog(QWidget* parent) + : QDialog(parent), + config(Configuration::getInstance()), + configValid(false) +{ + // URL field + QLabel* urlLabel = new QLabel(tr("ollama Server URL:"), this); + urlEdit = new QLineEdit(this); + urlEdit->setText(DEFAULT_OLLAMA_URL); + + resetButton = new QPushButton(tr("Reset"), this); + + QHBoxLayout* urlLayout = new QHBoxLayout(); + urlLayout->addWidget(urlEdit); + urlLayout->addWidget(resetButton); + + // LLM model combo (editable) + QLabel* modelLabel = new QLabel(tr("LLM Model:"), this); + llmModelCombo = new QComboBox(this); + llmModelCombo->setEditable(true); + + refreshModelsButton = new QPushButton(tr("Refresh"), this); + + QHBoxLayout* modelLayout = new QHBoxLayout(); + modelLayout->addWidget(llmModelCombo); + modelLayout->addWidget(refreshModelsButton); + + QLabel* modelHelpLabel = new QLabel( + tr("(You can type model name or select from list)"), this); + modelHelpLabel->setStyleSheet("QLabel { color: gray; font-size: small; }"); + + // buttons + probeButton = new QPushButton(tr("Probe"), this); + addButton = new QPushButton(tr("Add"), this); + addButton->setDefault(true); + cancelButton = new QPushButton(tr("Cancel"), this); + + QHBoxLayout* buttonLayout = new QHBoxLayout(); + buttonLayout->addWidget(probeButton); + buttonLayout->addStretch(); + buttonLayout->addWidget(cancelButton); + buttonLayout->addWidget(addButton); + + // main layout + QVBoxLayout* mainLayout = new QVBoxLayout(this); + mainLayout->addWidget(urlLabel); + mainLayout->addLayout(urlLayout); + mainLayout->addSpacing(10); + mainLayout->addWidget(modelLabel); + mainLayout->addLayout(modelLayout); + mainLayout->addWidget(modelHelpLabel); + mainLayout->addSpacing(20); + mainLayout->addLayout(buttonLayout); + + setLayout(mainLayout); + + // signals + QObject::connect(resetButton, &QPushButton::clicked, this, &OllamaConfigDialog::handleReset); + 
QObject::connect(refreshModelsButton, &QPushButton::clicked, this, &OllamaConfigDialog::handleRefresh); + QObject::connect(probeButton, &QPushButton::clicked, this, &OllamaConfigDialog::handleProbe); + QObject::connect(addButton, &QPushButton::clicked, this, &OllamaConfigDialog::handleAdd); + QObject::connect(cancelButton, &QPushButton::clicked, this, &QDialog::reject); + + // dialog + setWindowTitle(tr("Configure ollama Provider")); + resize(fontMetrics().averageCharWidth()*60, height()); + setModal(true); +} + +OllamaConfigDialog::~OllamaConfigDialog() +{ +} + +void OllamaConfigDialog::show() +{ + // load current config if any + string ollamaUrl = config.getWingmanOllamaUrl(); + if (ollamaUrl.empty()) { + urlEdit->setText(DEFAULT_OLLAMA_URL); + } else { + urlEdit->setText(QString::fromStdString(ollamaUrl)); + } + + llmModelCombo->clear(); + configValid = false; + + QDialog::show(); +} + +void OllamaConfigDialog::handleReset() +{ + urlEdit->setText(DEFAULT_OLLAMA_URL); + llmModelCombo->clear(); +} + +void OllamaConfigDialog::handleRefresh() +{ + string url = urlEdit->text().toStdString(); + if (url.empty()) { + QMessageBox::warning( + this, + tr("URL Required"), + tr("Please enter the ollama server URL.")); + return; + } + + try { + OllamaWingman wingman(url); + vector<string>& models = wingman.listModels(); + + llmModelCombo->clear(); + for (const auto& model : models) { + llmModelCombo->addItem(QString::fromStdString(model)); + } + + if (models.empty()) { + QMessageBox::warning( + this, + tr("No Models Found"), + tr("No models found on ollama server.
Please ensure ollama is running and has models installed.")); + } else { + QMessageBox::information( + this, + tr("Models Refreshed"), + tr("Successfully fetched %1 models from ollama server.").arg(models.size())); + } + } catch (const exception& e) { + QMessageBox::critical( + this, + tr("Refresh Failed"), + tr("Failed to fetch models from ollama server: %1").arg(e.what())); + } +} + +void OllamaConfigDialog::handleProbe() +{ + string url = urlEdit->text().toStdString(); + string model = llmModelCombo->currentText().toStdString(); + string errorMessage; + + if (config.probeOllamaProvider(url, model, errorMessage)) { + QMessageBox::information( + this, + tr("Configuration Valid"), + tr("ollama provider configuration is valid.")); + configValid = true; + } else { + QMessageBox::critical( + this, + tr("Configuration Invalid"), + tr("ollama provider configuration is invalid: %1") + .arg(QString::fromStdString(errorMessage))); + configValid = false; + } +} + +void OllamaConfigDialog::handleAdd() +{ + string url = urlEdit->text().toStdString(); + string model = llmModelCombo->currentText().toStdString(); + + // validate inputs + if (url.empty()) { + QMessageBox::warning( + this, + tr("URL Required"), + tr("Please enter the ollama server URL.")); + return; + } + + if (model.empty()) { + QMessageBox::warning( + this, + tr("Model Required"), + tr("Please select or enter a model name.")); + return; + } + + // generate unique ID using timestamp + auto now = chrono::system_clock::now(); + auto timestamp = chrono::duration_cast<chrono::milliseconds>(now.time_since_epoch()).count(); + + // extract host from URL for display name + string host = url; + size_t pos = url.find("://"); + if (pos != string::npos) { + host = url.substr(pos + 3); + } + + providerConfig.id = "ollama-" + to_string(timestamp); + providerConfig.displayName = "ollama " + model + " @ " + host; + providerConfig.providerType = WINGMAN_PROVIDER_OLLAMA; + providerConfig.url = url; + providerConfig.llmModel = model; +
providerConfig.isValid = configValid; + + accept(); +} + +} diff --git a/app/src/qt/dialogs/ollama_config_dialog.h b/app/src/qt/dialogs/ollama_config_dialog.h new file mode 100644 index 00000000..36854d9c --- /dev/null +++ b/app/src/qt/dialogs/ollama_config_dialog.h @@ -0,0 +1,69 @@ +/* + ollama_config_dialog.h MindForger thinking notebook + + Copyright (C) 2016-2026 Martin Dvorak + + This program is free software; you can redistribute it and/or + modify it under the terms of the GNU General Public License + as published by the Free Software Foundation; either version 2 + of the License, or (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program. If not, see . +*/ +#ifndef M8RUI_OLLAMA_CONFIG_DIALOG_H +#define M8RUI_OLLAMA_CONFIG_DIALOG_H + +#include +#include + +#include "../../lib/src/config/configuration.h" +#include "../../lib/src/mind/ai/llm/ollama_wingman.h" + +namespace m8r { + +class OllamaConfigDialog : public QDialog +{ + Q_OBJECT + +private: + Configuration& config; + + QLineEdit* urlEdit; + QPushButton* resetButton; + QComboBox* llmModelCombo; + QPushButton* refreshModelsButton; + QPushButton* probeButton; + QPushButton* addButton; + QPushButton* cancelButton; + + LlmProviderConfig providerConfig; + bool configValid; + +public: + explicit OllamaConfigDialog(QWidget* parent); + OllamaConfigDialog(const OllamaConfigDialog&) = delete; + OllamaConfigDialog(const OllamaConfigDialog&&) = delete; + OllamaConfigDialog& operator=(const OllamaConfigDialog&) = delete; + OllamaConfigDialog& operator=(const OllamaConfigDialog&&) = delete; + ~OllamaConfigDialog(); + + const LlmProviderConfig& getProviderConfig() const { return providerConfig; } + bool 
isConfigValid() const { return configValid; } + + void show(); + +private slots: + void handleReset(); + void handleRefresh(); + void handleProbe(); + void handleAdd(); +}; + +} +#endif // M8RUI_OLLAMA_CONFIG_DIALOG_H diff --git a/app/src/qt/dialogs/openai_config_dialog.cpp b/app/src/qt/dialogs/openai_config_dialog.cpp new file mode 100644 index 00000000..1a266150 --- /dev/null +++ b/app/src/qt/dialogs/openai_config_dialog.cpp @@ -0,0 +1,225 @@ +/* + openai_config_dialog.cpp MindForger thinking notebook + + Copyright (C) 2016-2026 Martin Dvorak + + This program is free software; you can redistribute it and/or + modify it under the terms of the GNU General Public License + as published by the Free Software Foundation; either version 2 + of the License, or (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program. If not, see . +*/ +#include "openai_config_dialog.h" + +namespace m8r { + +using namespace std; + +OpenAiConfigDialog::OpenAiConfigDialog(QWidget* parent) + : QDialog(parent), + config(Configuration::getInstance()), + configValid(false) +{ + // API key field + QLabel* apiKeyLabel = new QLabel(tr("API Key:"), this); + apiKeyEdit = new QLineEdit(this); + apiKeyEdit->setEchoMode(QLineEdit::Password); + + resetButton = new QPushButton(tr("Reset"), this); + + QHBoxLayout* apiKeyLayout = new QHBoxLayout(); + apiKeyLayout->addWidget(apiKeyEdit); + apiKeyLayout->addWidget(resetButton); + + // environment variable info + envVarInfoLabel = new QLabel( + tr("Environment variable: %1
(if set, overrides the value above)") + .arg(ENV_VAR_OPENAI_API_KEY), this); + + // LLM model combo (editable) + QLabel* modelLabel = new QLabel(tr("LLM Model:"), this); + llmModelCombo = new QComboBox(this); + llmModelCombo->setEditable(true); + llmModelCombo->addItem(LLM_MODEL_GPT35_TURBO); + llmModelCombo->addItem(LLM_MODEL_GPT4); + + refreshModelsButton = new QPushButton(tr("Refresh"), this); + + QHBoxLayout* modelLayout = new QHBoxLayout(); + modelLayout->addWidget(llmModelCombo); + modelLayout->addWidget(refreshModelsButton); + + QLabel* modelHelpLabel = new QLabel( + tr("(You can type model name or select from list)"), this); + modelHelpLabel->setStyleSheet("QLabel { color: gray; font-size: small; }"); + + // buttons + probeButton = new QPushButton(tr("Probe"), this); + addButton = new QPushButton(tr("Add"), this); + addButton->setDefault(true); + cancelButton = new QPushButton(tr("Cancel"), this); + + QHBoxLayout* buttonLayout = new QHBoxLayout(); + buttonLayout->addWidget(probeButton); + buttonLayout->addStretch(); + buttonLayout->addWidget(cancelButton); + buttonLayout->addWidget(addButton); + + // main layout + QVBoxLayout* mainLayout = new QVBoxLayout(this); + mainLayout->addWidget(apiKeyLabel); + mainLayout->addLayout(apiKeyLayout); + mainLayout->addWidget(envVarInfoLabel); + mainLayout->addSpacing(10); + mainLayout->addWidget(modelLabel); + mainLayout->addLayout(modelLayout); + mainLayout->addWidget(modelHelpLabel); + mainLayout->addSpacing(20); + mainLayout->addLayout(buttonLayout); + + setLayout(mainLayout); + + // signals + QObject::connect(resetButton, &QPushButton::clicked, this, &OpenAiConfigDialog::handleReset); + QObject::connect(refreshModelsButton, &QPushButton::clicked, this, &OpenAiConfigDialog::handleRefresh); + QObject::connect(probeButton, &QPushButton::clicked, this, &OpenAiConfigDialog::handleProbe); + QObject::connect(addButton, &QPushButton::clicked, this, &OpenAiConfigDialog::handleAdd); + QObject::connect(cancelButton, 
&QPushButton::clicked, this, &QDialog::reject); + + // dialog + setWindowTitle(tr("Configure OpenAI Provider")); + resize(fontMetrics().averageCharWidth()*60, height()); + setModal(true); +} + +OpenAiConfigDialog::~OpenAiConfigDialog() +{ +} + +void OpenAiConfigDialog::show() +{ + // load current config if any + apiKeyEdit->setText(QString::fromStdString(config.getWingmanOpenAiApiKey())); + llmModelCombo->setCurrentText(LLM_MODEL_GPT35_TURBO); + configValid = false; + + QDialog::show(); +} + +void OpenAiConfigDialog::handleReset() +{ + apiKeyEdit->clear(); + llmModelCombo->setCurrentText(LLM_MODEL_GPT35_TURBO); +} + +void OpenAiConfigDialog::handleRefresh() +{ + // validate API key is set + string apiKey = apiKeyEdit->text().toStdString(); + if (apiKey.empty() && !config.canWingmanOpenAiFromEnv()) { + QMessageBox::warning( + this, + tr("API Key Required"), + tr("Please enter an API key or set the %1 environment variable before refreshing models.") + .arg(ENV_VAR_OPENAI_API_KEY)); + return; + } + + // temporarily set API key to fetch models + string originalKey = config.getWingmanOpenAiApiKey(); + if (!apiKey.empty()) { + config.setWingmanOpenAiApiKey(apiKey); + } + + try { + OpenAiWingman wingman; + vector<string>& models = wingman.listModels(); + + llmModelCombo->clear(); + for (const auto& model : models) { + llmModelCombo->addItem(QString::fromStdString(model)); + } + + if (!models.empty()) { + QMessageBox::information( + this, + tr("Models Refreshed"), + tr("Successfully fetched %1 models from OpenAI API.").arg(models.size())); + } + } catch (const exception& e) { + QMessageBox::critical( + this, + tr("Refresh Failed"), + tr("Failed to fetch models from OpenAI API: %1").arg(e.what())); + } + + // restore original key + config.setWingmanOpenAiApiKey(originalKey); +} + +void OpenAiConfigDialog::handleProbe() +{ + string apiKey = apiKeyEdit->text().toStdString(); + string model = llmModelCombo->currentText().toStdString(); + string errorMessage; + + if
(config.probeOpenAiProvider(apiKey, model, errorMessage)) { + QMessageBox::information( + this, + tr("Configuration Valid"), + tr("OpenAI provider configuration is valid.")); + configValid = true; + } else { + QMessageBox::critical( + this, + tr("Configuration Invalid"), + tr("OpenAI provider configuration is invalid: %1") + .arg(QString::fromStdString(errorMessage))); + configValid = false; + } +} + +void OpenAiConfigDialog::handleAdd() +{ + string apiKey = apiKeyEdit->text().toStdString(); + string model = llmModelCombo->currentText().toStdString(); + + // validate inputs + if (apiKey.empty() && !config.canWingmanOpenAiFromEnv()) { + QMessageBox::warning( + this, + tr("API Key Required"), + tr("Please enter an API key or set the %1 environment variable.") + .arg(ENV_VAR_OPENAI_API_KEY)); + return; + } + + if (model.empty()) { + QMessageBox::warning( + this, + tr("Model Required"), + tr("Please select or enter a model name.")); + return; + } + + // generate unique ID using timestamp + auto now = chrono::system_clock::now(); + auto timestamp = chrono::duration_cast<chrono::seconds>(now.time_since_epoch()).count(); + providerConfig.id = "openai-" + to_string(timestamp); + providerConfig.displayName = "OpenAI " + model; + providerConfig.providerType = WINGMAN_PROVIDER_OPENAI; + providerConfig.apiKey = apiKey; + providerConfig.llmModel = model; + providerConfig.isValid = configValid; + + accept(); +} + +} diff --git a/app/src/qt/dialogs/openai_config_dialog.h b/app/src/qt/dialogs/openai_config_dialog.h new file mode 100644 index 00000000..289d1021 --- /dev/null +++ b/app/src/qt/dialogs/openai_config_dialog.h @@ -0,0 +1,70 @@ +/* + openai_config_dialog.h MindForger thinking notebook + + Copyright (C) 2016-2026 Martin Dvorak <martin.dvorak@mindforger.com> + + This program is free software; you can redistribute it and/or + modify it under the terms of the GNU General Public License + as published by the Free Software Foundation; either version 2 + of the License, or (at your option) any later version.
+ + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program. If not, see <http://www.gnu.org/licenses/>. +*/ +#ifndef M8RUI_OPENAI_CONFIG_DIALOG_H +#define M8RUI_OPENAI_CONFIG_DIALOG_H + +#include <string> +#include <QtWidgets> + +#include "../../lib/src/config/configuration.h" +#include "../../lib/src/mind/ai/llm/openai_wingman.h" + +namespace m8r { + +class OpenAiConfigDialog : public QDialog +{ + Q_OBJECT + +private: + Configuration& config; + + QLineEdit* apiKeyEdit; + QPushButton* resetButton; + QLabel* envVarInfoLabel; + QComboBox* llmModelCombo; + QPushButton* refreshModelsButton; + QPushButton* probeButton; + QPushButton* addButton; + QPushButton* cancelButton; + + LlmProviderConfig providerConfig; + bool configValid; + +public: + explicit OpenAiConfigDialog(QWidget* parent); + OpenAiConfigDialog(const OpenAiConfigDialog&) = delete; + OpenAiConfigDialog(const OpenAiConfigDialog&&) = delete; + OpenAiConfigDialog& operator=(const OpenAiConfigDialog&) = delete; + OpenAiConfigDialog& operator=(const OpenAiConfigDialog&&) = delete; + ~OpenAiConfigDialog(); + + const LlmProviderConfig& getProviderConfig() const { return providerConfig; } + bool isConfigValid() const { return configValid; } + + void show(); + +private slots: + void handleReset(); + void handleRefresh(); + void handleProbe(); + void handleAdd(); +}; + +} +#endif // M8RUI_OPENAI_CONFIG_DIALOG_H diff --git a/build/debian/debian/copyright b/build/debian/debian/copyright index 42e95756..b5a4d6b6 100644 --- a/build/debian/debian/copyright +++ b/build/debian/debian/copyright @@ -3,7 +3,7 @@ Upstream-Name: mindforger Source: https://github.com/dvorka/mindforger Files: debian/* -Copyright: 2016-2024 Martin Dvorak +Copyright: 2016-2026 Martin Dvorak License: GPL-2+ MindForger is
licensed under GNU GPL version 2 or any later version. . diff --git a/build/ubuntu/debian/copyright b/build/ubuntu/debian/copyright index 42e95756..b5a4d6b6 100644 --- a/build/ubuntu/debian/copyright +++ b/build/ubuntu/debian/copyright @@ -3,7 +3,7 @@ Upstream-Name: mindforger Source: https://github.com/dvorka/mindforger Files: debian/* -Copyright: 2016-2024 Martin Dvorak +Copyright: 2016-2026 Martin Dvorak License: GPL-2+ MindForger is licensed under GNU GPL version 2 or any later version. . diff --git a/lib/src/app_info.h b/lib/src/app_info.h index bf129285..a68c2160 100644 --- a/lib/src/app_info.h +++ b/lib/src/app_info.h @@ -8,5 +8,5 @@ #define MINDFORGER_APP_AUTHOR "Martin Dvorak" #define MINDFORGER_APP_URL "https://www.mindforger.com" #define MINDFORGER_APP_COMPANY MINDFORGER_APP_NAME -#define MINDFORGER_APP_LEGAL "\xA9 2016-2024 Martin Dvorak. All Rights Reserved" +#define MINDFORGER_APP_LEGAL "\xA9 2016-2026 Martin Dvorak. All Rights Reserved" #define MINDFORGER_APP_EXE "mindforger.exe" diff --git a/lib/src/config/configuration.cpp b/lib/src/config/configuration.cpp index 60afa395..096ab64a 100644 --- a/lib/src/config/configuration.cpp +++ b/lib/src/config/configuration.cpp @@ -574,4 +574,146 @@ bool Configuration::isWingman() { return WingmanLlmProviders::WINGMAN_PROVIDER_NONE==wingmanProvider?false:true; } +/* + * Wingman2 LLM Provider Management + */ + +LlmProviderConfig* Configuration::getLlmProviderById(const string& id) { + for (auto& provider : llmProviders) { + if (provider.id == id) { + return &provider; + } + } + return nullptr; +} + +LlmProviderConfig* Configuration::getActiveLlmProvider() { + if (activeLlmProviderId.empty()) { + return nullptr; + } + return getLlmProviderById(activeLlmProviderId); +} + +void Configuration::addLlmProvider(const LlmProviderConfig& provider) { + llmProviders.push_back(provider); + MF_DEBUG("Configuration::addLlmProvider() added: " << provider.id << endl); +} + +void Configuration::updateLlmProvider(const string& 
id, const LlmProviderConfig& provider) { + for (auto& p : llmProviders) { + if (p.id == id) { + p = provider; + MF_DEBUG("Configuration::updateLlmProvider() updated: " << id << endl); + return; + } + } +} + +void Configuration::removeLlmProvider(const string& id) { + llmProviders.erase( + std::remove_if( + llmProviders.begin(), + llmProviders.end(), + [&id](const LlmProviderConfig& p) { return p.id == id; }), + llmProviders.end()); + + // if active provider was removed, clear active provider + if (activeLlmProviderId == id) { + activeLlmProviderId.clear(); + } + + MF_DEBUG("Configuration::removeLlmProvider() removed: " << id << endl); +} + +void Configuration::setActiveLlmProvider(const string& id) { + activeLlmProviderId = id; + MF_DEBUG("Configuration::setActiveLlmProvider() set to: " << id << endl); +} + +bool Configuration::probeOpenAiProvider( + const string& apiKey, + const string& model, + string& errorMessage) +{ + // basic validation + if (apiKey.empty() && !canWingmanOpenAiFromEnv()) { + errorMessage = "API key is required for OpenAI provider"; + return false; + } + + if (model.empty()) { + errorMessage = "Model name is required"; + return false; + } + + // TODO: actually test the connection by calling OpenAI API + // For now, just validate the inputs + MF_DEBUG("Configuration::probeOpenAiProvider() validated: " << model << endl); + return true; +} + +bool Configuration::probeOllamaProvider( + const string& url, + const string& model, + string& errorMessage) +{ + // basic validation + if (url.empty()) { + errorMessage = "URL is required for ollama provider"; + return false; + } + + if (model.empty()) { + errorMessage = "Model name is required"; + return false; + } + + // TODO: actually test the connection by calling ollama API + // For now, just validate the inputs + MF_DEBUG("Configuration::probeOllamaProvider() validated: " << url << ", " << model << endl); + return true; +} + +void Configuration::migrateFromLegacyWingmanConfig() { + // check if already 
migrated or no legacy config + if (!llmProviders.empty()) { + return; + } + + if (wingmanProvider == WINGMAN_PROVIDER_NONE) { + return; + } + + MF_DEBUG("Configuration::migrateFromLegacyWingmanConfig() migrating..." << endl); + + // migrate based on provider type + if (wingmanProvider == WINGMAN_PROVIDER_OPENAI && canWingmanOpenAi()) { + LlmProviderConfig provider; + provider.id = "legacy-openai"; + provider.displayName = "OpenAI (migrated)"; + provider.providerType = WINGMAN_PROVIDER_OPENAI; + provider.apiKey = wingmanOpenAiApiKey; + provider.llmModel = wingmanOpenAiLlm; + provider.isValid = true; + + addLlmProvider(provider); + setActiveLlmProvider(provider.id); + + MF_DEBUG("Configuration::migrateFromLegacyWingmanConfig() migrated OpenAI" << endl); + } else if (wingmanProvider == WINGMAN_PROVIDER_OLLAMA && canWingmanOllama()) { + LlmProviderConfig provider; + provider.id = "legacy-ollama"; + provider.displayName = "ollama (migrated)"; + provider.providerType = WINGMAN_PROVIDER_OLLAMA; + provider.url = wingmanOllamaUrl; + provider.llmModel = wingmanOllamaLlm; + provider.isValid = true; + + addLlmProvider(provider); + setActiveLlmProvider(provider.id); + + MF_DEBUG("Configuration::migrateFromLegacyWingmanConfig() migrated ollama" << endl); + } +} + } // m8r namespace diff --git a/lib/src/config/configuration.h b/lib/src/config/configuration.h index 66bc629a..c3582ee6 100644 --- a/lib/src/config/configuration.h +++ b/lib/src/config/configuration.h @@ -59,6 +59,10 @@ constexpr const auto LLM_MODEL_GPT4 = "gpt-4"; constexpr const auto LLM_MODEL_LLAMA2 = "llama2"; constexpr const auto LLM_MODEL_PHI = "phi"; +// Default URLs for LLM providers +constexpr const auto DEFAULT_OLLAMA_URL = "http://localhost:11434"; +constexpr const auto DEFAULT_OPENAI_API_URL = "https://api.openai.com/v1"; + // const in constexpr makes value const constexpr const auto ENV_VAR_HOME = "HOME"; constexpr const auto ENV_VAR_DISPLAY = "DISPLAY"; @@ -186,6 +190,26 @@ struct KnowledgeTool constexpr 
const auto ENV_VAR_OPENAI_API_KEY = "MINDFORGER_OPENAI_API_KEY"; constexpr const auto ENV_VAR_OPENAI_LLM_MODEL = "MINDFORGER_OPENAI_LLM_MODEL"; +/** + * @brief LLM Provider Configuration + * + * Represents configuration for a single Large Language Model provider. + * Supports OpenAI and ollama providers with provider-specific fields. + */ +struct LlmProviderConfig { + std::string id; // unique identifier (e.g., "openai-1", "ollama-local") + std::string displayName; // user-friendly name (e.g., "OpenAI GPT-4", "Local Ollama") + WingmanLlmProviders providerType; // WINGMAN_PROVIDER_OPENAI, WINGMAN_PROVIDER_OLLAMA + std::string url; // for ollama: base URL, for OpenAI: empty + std::string apiKey; // for OpenAI: API key, for ollama: empty + std::string llmModel; // model name (e.g., "gpt-4", "llama2") + bool isValid; // whether configuration was validated/probed + + LlmProviderConfig() + : providerType(WINGMAN_PROVIDER_NONE), + isValid(false) {} +}; + // improve platform/language specific constexpr const auto DEFAULT_NEW_OUTLINE = "# New Markdown File\n\nThis is a new Markdown file created by MindForger.\n\n#Section 1\nThe first section.\n\n"; @@ -394,6 +418,10 @@ class Configuration { std::string wingmanOllamaUrl; // base URL like http://localhost:11434 std::string wingmanOllamaLlm; + // Wingman2: collection of configured LLM providers + std::vector<LlmProviderConfig> llmProviders; + std::string activeLlmProviderId; + TimeScope timeScope; std::string timeScopeAsString; std::vector<std::string> tagsScope; @@ -604,6 +632,67 @@ class Configuration { std::string getWingmanOllamaLlm() const { return wingmanOllamaLlm; } void setWingmanOllamaLlm(std::string llm) { wingmanOllamaLlm = llm; } + /* + * Wingman2 LLM Provider Management + */ + + /** + * @brief Get all configured LLM providers + */ + std::vector<LlmProviderConfig>& getLlmProviders() { return llmProviders; } + + /** + * @brief Get LLM provider by ID + * @return Pointer to provider or nullptr if not found + */ + LlmProviderConfig* getLlmProviderById(const
std::string& id); + + /** + * @brief Get the active LLM provider + * @return Pointer to active provider or nullptr if none set + */ + LlmProviderConfig* getActiveLlmProvider(); + + /** + * @brief Add a new LLM provider configuration + * @param provider The provider configuration to add + */ + void addLlmProvider(const LlmProviderConfig& provider); + + /** + * @brief Update an existing LLM provider + * @param id Provider ID to update + * @param provider Updated configuration + */ + void updateLlmProvider(const std::string& id, const LlmProviderConfig& provider); + + /** + * @brief Remove an LLM provider + * @param id Provider ID to remove + */ + void removeLlmProvider(const std::string& id); + + /** + * @brief Set the active LLM provider + * @param id Provider ID to activate + */ + void setActiveLlmProvider(const std::string& id); + + /** + * @brief Probe/validate OpenAI provider configuration + */ + bool probeOpenAiProvider(const std::string& apiKey, const std::string& model, std::string& errorMessage); + + /** + * @brief Probe/validate ollama provider configuration + */ + bool probeOllamaProvider(const std::string& url, const std::string& model, std::string& errorMessage); + + /** + * @brief Migrate from legacy Wingman configuration to Wingman2 + */ + void migrateFromLegacyWingmanConfig(); + /** * @brief Check whether a Wingman LLM provider is ready from * the configuration perspective. 
diff --git a/lib/src/mind/ai/llm/ollama_wingman.cpp b/lib/src/mind/ai/llm/ollama_wingman.cpp index 59f40c42..a8b48bd7 100644 --- a/lib/src/mind/ai/llm/ollama_wingman.cpp +++ b/lib/src/mind/ai/llm/ollama_wingman.cpp @@ -150,7 +150,7 @@ void OllamaWingman::listModelsHttpGet() { MF_DEBUG(" name: " << item.value()["name"] << endl); string llmModelName{item.value()["name"]}; // add model to list - this->llmModels.push_back(llmModel); + this->llmModels.push_back(llmModelName); } } } diff --git a/lib/src/mind/ai/llm/openai_wingman.cpp b/lib/src/mind/ai/llm/openai_wingman.cpp index f0eb17ae..d88afbcb 100644 --- a/lib/src/mind/ai/llm/openai_wingman.cpp +++ b/lib/src/mind/ai/llm/openai_wingman.cpp @@ -68,13 +68,119 @@ std::vector<std::string>& OpenAiWingman::listModels() { llmModels.clear(); - // TODO list models using OpenAI API - will many models be confusing for user? - llmModels.push_back(LLM_GPT_35_TURBO); - llmModels.push_back(LLM_GPT_4); + // try to fetch models from OpenAI API + try { + listModelsHttpGet(); + } catch (...)
{ + MF_DEBUG("OpenAiWingman::listModels() failed to fetch from API, using defaults" << endl); + } + + // if API call failed or returned no models, use defaults + if (llmModels.empty()) { + llmModels.push_back(LLM_GPT_35_TURBO); + llmModels.push_back(LLM_GPT_4); + } return llmModels; } +void OpenAiWingman::listModelsHttpGet() +{ + string url = "https://api.openai.com/v1/models"; + + MF_DEBUG("OpenAiWingman::listModelsHttpGet() url: " << url << endl); + +#if !defined(__APPLE__) && !defined(_WIN32) + CURL* curl = curl_easy_init(); + if (!curl) { + return; + } +#endif + + string responseString; + +#if defined(_WIN32) || defined(__APPLE__) + QNetworkAccessManager networkManager; + + QNetworkRequest request(QUrl(QString::fromStdString(url))); + request.setHeader( + QNetworkRequest::ContentTypeHeader, + "application/json"); + request.setRawHeader( + "Authorization", + "Bearer " + QString::fromStdString(config.getWingmanOpenAiApiKey()).toUtf8()); + + QNetworkReply* reply = networkManager.get(request); + QEventLoop loop; + QObject::connect(reply, &QNetworkReply::finished, &loop, &QEventLoop::quit); + loop.exec(); + + auto error = reply->error(); + if (error != QNetworkReply::NoError) { + MF_DEBUG("OpenAiWingman::listModelsHttpGet() error: " << reply->errorString().toStdString() << endl); + reply->deleteLater(); + return; + } + + QByteArray read = reply->readAll(); + responseString = QString{read}.toStdString(); + reply->deleteLater(); +#else + // CURL implementation + curl_easy_setopt(curl, CURLOPT_HTTPGET, 1); + curl_easy_setopt(curl, CURLOPT_URL, url.c_str()); + curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, openaiCurlWriteCallback); + curl_easy_setopt(curl, CURLOPT_WRITEDATA, &responseString); + + struct curl_slist* headers = NULL; + headers = curl_slist_append( + headers, + ("Authorization: Bearer " + config.getWingmanOpenAiApiKey()).c_str()); + curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers); + + CURLcode res = curl_easy_perform(curl); + curl_easy_cleanup(curl); + 
curl_slist_free_all(headers); + + if (res != CURLE_OK) { + MF_DEBUG("OpenAiWingman::listModelsHttpGet() error: " << curl_easy_strerror(res) << endl); + return; + } +#endif + + // parse JSON response + nlohmann::json httpResponseJson; + try { + httpResponseJson = nlohmann::json::parse(responseString); + } catch (...) { + MF_DEBUG( + "Error: unable to parse OpenAI models JSON response:" << endl << + "'" << responseString << "'" << endl + ); + return; + } + + MF_DEBUG( + "OpenAiWingman::listModelsHttpGet() parsed response:" << endl + << ">>>" + << httpResponseJson.dump(4) + << "<<<" + << endl); + + if (httpResponseJson.contains("data")) { + for (const auto& item : httpResponseJson["data"].items()) { + if (item.value().contains("id")) { + string modelId = item.value()["id"]; + // filter to only include GPT models (optional) + if (modelId.find("gpt") != string::npos) { + llmModels.push_back(modelId); + MF_DEBUG(" Added model: " << modelId << endl); + } + } + } + } +} + // TODO refactor to parent class so that all wingmans can use it /** * OpenAI cURL GET request. diff --git a/lib/src/mind/ai/llm/openai_wingman.h b/lib/src/mind/ai/llm/openai_wingman.h index fc86b079..6a8bfa66 100644 --- a/lib/src/mind/ai/llm/openai_wingman.h +++ b/lib/src/mind/ai/llm/openai_wingman.h @@ -52,6 +52,7 @@ class OpenAiWingman: Wingman std::string defaultLlmModel; void curlGet(CommandWingmanChat& command); + void listModelsHttpGet(); public: explicit OpenAiWingman(); diff --git a/vibe/designs/wingman2-llm-configuration.md b/vibe/designs/wingman2-llm-configuration.md new file mode 100644 index 00000000..f0edfa85 --- /dev/null +++ b/vibe/designs/wingman2-llm-configuration.md @@ -0,0 +1,1290 @@ +# Wingman2 LLM Configuration Feature Design + +## Overview + +This document describes the design for a new **Wingman2** LLM configuration feature in MindForger. 
The feature adds a new tab to the Workspace/Preferences dialog that provides an improved, structured approach to configuring Large Language Model (LLM) providers for use with Wingman. + +### Goals + +- Provide a clean, user-friendly interface for managing multiple LLM provider configurations +- Support adding, configuring, and selecting LLM providers (OpenAI, ollama) +- Enable probing/validation of LLM provider configurations +- Maintain backward compatibility with existing Wingman configuration +- Store multiple configured LLM providers and allow easy switching between them + +### Non-Goals + +- Modification or removal of existing Wingman tab (it stays as-is) +- Support for additional LLM providers beyond OpenAI and ollama +- Automatic provider discovery + +## Architecture + +### Component Overview + +``` +ConfigurationDialog +├── Existing Tabs (App, Viewer, Editor, etc.) +├── Wingman Tab (existing - unchanged) +└── Wingman2 Tab (NEW) + ├── LLM Provider Dropdown (shows configured providers) + ├── Add LLM Provider Button + └── Help Text + +Add LLM Provider Dialog (NEW) +├── Provider Selection (OpenAI, ollama) +└── Next Button → Opens Provider-Specific Config Dialog + +OpenAI Configuration Dialog (NEW) +├── API Key Field +├── LLM Model Dropdown +├── Probe Button +├── Add Button +└── Cancel Button + +ollama Configuration Dialog (NEW) +├── URL Field +├── LLM Model Field/Dropdown +├── Probe Button +├── Add Button +└── Cancel Button +``` + +## Data Model + +### Constants + +Default values for LLM provider configurations: + +```cpp +// Default URLs +constexpr const auto DEFAULT_OLLAMA_URL = "http://localhost:11434"; +constexpr const auto DEFAULT_OPENAI_API_URL = "https://api.openai.com/v1"; + +// Default models (already defined in configuration.h) +constexpr const auto LLM_MODEL_NONE = ""; +constexpr const auto LLM_MODEL_GPT35_TURBO = "gpt-3.5-turbo"; +constexpr const auto LLM_MODEL_GPT4 = "gpt-4"; +constexpr const auto LLM_MODEL_PHI = "phi"; +``` + +### Configuration 
Storage + +New fields added to `Configuration` class (`lib/src/config/configuration.h`): + +```cpp +// LLM provider configuration structure +struct LlmProviderConfig { + std::string id; // unique identifier (e.g., "openai-1", "ollama-local") + std::string displayName; // user-friendly name (e.g., "OpenAI GPT-4", "Local Ollama") + WingmanLlmProviders providerType; // WINGMAN_PROVIDER_OPENAI, WINGMAN_PROVIDER_OLLAMA + std::string url; // for ollama: base URL, for OpenAI: empty + std::string apiKey; // for OpenAI: API key, for ollama: empty + std::string llmModel; // model name (e.g., "gpt-4", "llama2") + bool isValid; // whether configuration was validated/probed + + LlmProviderConfig() + : providerType(WINGMAN_PROVIDER_NONE), + isValid(false) {} +}; + +// In Configuration class: +private: + // Collection of configured LLM providers + std::vector<LlmProviderConfig> llmProviders; + // Currently selected/active provider (ID from llmProviders) + std::string activeLlmProviderId; +``` + +### Configuration File Persistence + +The LLM provider configurations will be persisted in the `.mindforger.md` configuration file using a new Markdown section: + +```markdown +## Wingman2 LLM Providers + +Active Provider: openai-primary + +### Provider: openai-primary +- Type: OpenAI +- Display Name: OpenAI GPT-4 +- Model: gpt-4 +- API Key: [encrypted or reference to env var] +- Valid: true + +### Provider: ollama-local +- Type: ollama +- Display Name: Local Ollama +- Model: llama2 +- URL: http://localhost:11434 +- Valid: true +``` + +### Configuration API + +New methods in `Configuration` class: + +```cpp +// Provider management +std::vector<LlmProviderConfig>& getLlmProviders(); +LlmProviderConfig* getLlmProviderById(const std::string& id); +LlmProviderConfig* getActiveLlmProvider(); +void addLlmProvider(const LlmProviderConfig& provider); +void updateLlmProvider(const std::string& id, const LlmProviderConfig& provider); +void removeLlmProvider(const std::string& id); +void setActiveLlmProvider(const std::string& id); +
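+// example usage (illustrative sketch only - names mirror the API proposed above): +// LlmProviderConfig p; +// p.id = "openai-1"; +// p.displayName = "OpenAI gpt-4"; +// p.providerType = WINGMAN_PROVIDER_OPENAI; +// p.llmModel = "gpt-4"; +// config.addLlmProvider(p); +// config.setActiveLlmProvider(p.id); +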
+// Provider validation +bool probeLlmProvider(const LlmProviderConfig& provider, std::string& errorMessage); +bool probeOpenAiProvider(const std::string& apiKey, const std::string& model, std::string& errorMessage); +bool probeOllamaProvider(const std::string& url, const std::string& model, std::string& errorMessage); + +// Backward compatibility with existing Wingman configuration +void migrateFromLegacyWingmanConfig(); +``` + +## UI Components + +### 1. Wingman2 Tab (Main Configuration Panel) + +**Location**: `app/src/qt/dialogs/configuration_dialog.h` and `.cpp` + +**Class**: `ConfigurationDialog::Wingman2Tab` + +**Layout**: +``` +┌─ Wingman2 Tab ────────────────────────────────┐ +│ │ +│ Configure Large Language Model (LLM) to be │ +│ used by Wingman │ +│ │ +│ LLM Provider: [Dropdown ▼] [Add Provider] │ +│ │ +│ ┌─ Selected Provider Details ─────────────┐ │ +│ │ │ │ +│ │ Provider Type: OpenAI │ │ +│ │ Model: gpt-4 │ │ +│ │ Status: Configured ✓ │ │ +│ │ │ │ +│ │ [Edit] [Remove] [Test Connection] │ │ +│ └──────────────────────────────────────────┘ │ +│ │ +└───────────────────────────────────────────────┘ +``` + +**Fields**: +- `QLabel* helpLabel` - informational text +- `QComboBox* llmProvidersCombo` - dropdown showing configured providers +- `QPushButton* addProviderButton` - opens Add LLM Provider dialog +- `QGroupBox* providerDetailsGroup` - shows details of selected provider +- `QLabel* providerTypeLabel, *modelLabel, *statusLabel` +- `QPushButton* editButton, *removeButton, *testButton` + +**Behavior**: +- On load: populate dropdown with configured providers from `config.getLlmProviders()` +- On provider selection: display details in provider details group +- On "Add Provider" click: open AddLlmProviderDialog +- On "Edit" click: open appropriate provider config dialog pre-filled +- On "Remove" click: confirm and remove provider from configuration +- On "Test Connection" click: run probe for selected provider + +### 2. 
Add LLM Provider Dialog + +**Location**: `app/src/qt/dialogs/add_llm_provider_dialog.h` and `.cpp` + +**Class**: `AddLlmProviderDialog` + +**Layout**: +``` +┌─ New LLM Provider ────────────────────────────┐ +│ │ +│ Which provider do you want to configure? │ +│ │ +│ Provider Type: [OpenAI ▼] │ +│ │ +│ [Cancel] [Next >] │ +└───────────────────────────────────────────────┘ +``` + +**Fields**: +- `QLabel* questionLabel` +- `QComboBox* providerTypeCombo` - options: "OpenAI", "ollama" +- `QPushButton* nextButton, *cancelButton` + +**Behavior**: +- On "Next" click: + - If OpenAI selected → open OpenAiConfigDialog + - If ollama selected → open OllamaConfigDialog +- On "Cancel" click: close dialog + +### 3. OpenAI Configuration Dialog + +**Location**: `app/src/qt/dialogs/openai_config_dialog.h` and `.cpp` + +**Class**: `OpenAiConfigDialog` + +**Layout**: +``` +┌─ Configure OpenAI Provider ───────────────────┐ +│ │ +│ API Key: [________________] [Reset] │ +│ │ +│ Environment variable: MINDFORGER_OPENAI_API_KEY +│ (if set, overrides the value above) │ +│ │ +│ LLM Model: [gpt-3.5-turbo ▼] [Refresh] │ +│ (You can type model name or select from list) +│ │ +│ [Probe] [Add] [Cancel] │ +└───────────────────────────────────────────────┘ +``` + +**Fields**: +- `QLineEdit* apiKeyEdit` - API key input (masked) +- `QPushButton* resetApiKeyButton` - reset to defaults +- `QLabel* envVarInfoLabel` - shows env var name +- `QComboBox* llmModelCombo` - model selection (editable) +- `QPushButton* refreshModelsButton` - fetch available models from OpenAI +- `QPushButton* probeButton, *addButton, *cancelButton` + +**Behavior**: +- On "Reset" click: + - Clear API key field + - Set model combo to default ("gpt-3.5-turbo") +- On "Refresh" click: + - Validate API key is set (from field or env var) + - Create temporary `OpenAiWingman` instance + - Call `listModels()` to fetch available models from OpenAI API + - Populate `llmModelCombo` with results + - Handle errors gracefully (show error message) + 
- Note: OpenAI currently supports listing models via API +- On "Probe" click: + - Validate API key is set (from field or env var) + - Validate model is set (typed or selected) + - Call `config.probeOpenAiProvider(apiKey, model, errorMsg)` + - Show success/error message +- On "Add" click: + - Validate inputs (API key, model) + - Create `LlmProviderConfig` with user inputs + - Generate unique ID (e.g., "openai-{timestamp}") + - Set display name (e.g., "OpenAI {model}") + - Call `config.addLlmProvider(providerConfig)` + - Close dialog +- On "Cancel" click: close dialog +- Model combo is **editable**: user can type custom model name or select from dropdown + - Implementation: `llmModelCombo->setEditable(true);` + - Get value: `llmModelCombo->currentText().toStdString()` + +### 4. ollama Configuration Dialog + +**Location**: `app/src/qt/dialogs/ollama_config_dialog.h` and `.cpp` + +**Class**: `OllamaConfigDialog` + +**Layout**: +``` +┌─ Configure ollama Provider ───────────────────┐ +│ │ +│ ollama Server URL: │ +│ [http://localhost:11434] [Reset] │ +│ │ +│ LLM Model: [llama2 ▼] [Refresh] │ +│ (You can type model name or select from list) +│ │ +│ [Probe] [Add] [Cancel] │ +└───────────────────────────────────────────────┘ +``` + +**Fields**: +- `QLineEdit* urlEdit` - ollama server URL +- `QPushButton* resetUrlButton` - reset to default URL +- `QComboBox* llmModelCombo` - model selection (editable) +- `QPushButton* refreshModelsButton` - fetch available models from server +- `QPushButton* probeButton, *addButton, *cancelButton` + +**Behavior**: +- On "Reset" click: + - Set URL to default: `http://localhost:11434` + - Clear model selection +- On "Refresh" click: + - Validate URL is set + - Create temporary `OllamaWingman` instance with URL + - Call `listModels()` to fetch available models from ollama server + - Populate `llmModelCombo` with results + - Handle errors gracefully (show error message if server unreachable) + - Note: ollama has `listModels()` implementation that 
calls `/api/tags` endpoint +- On "Probe" click: + - Validate URL is set + - Validate model is set (typed or selected) + - Call `config.probeOllamaProvider(url, model, errorMsg)` + - Show success/error message +- On "Add" click: + - Validate inputs (URL, model) + - Create `LlmProviderConfig` with user inputs + - Generate unique ID (e.g., "ollama-{timestamp}") + - Set display name (e.g., "ollama {model} @ {host}") + - Call `config.addLlmProvider(providerConfig)` + - Close dialog +- On "Cancel" click: close dialog +- Model combo is **editable**: user can type custom model name or select from dropdown + - Implementation: `llmModelCombo->setEditable(true);` + - Get value: `llmModelCombo->currentText().toStdString()` + +## Implementation Details + +### Existing Implementation Review + +**ollama Wingman** (`lib/src/mind/ai/llm/ollama_wingman.{h,cpp}`): +- ✓ **listModels() implemented**: Calls `/api/tags` endpoint to fetch available models +- Returns `std::vector<std::string>` with model names +- Uses CURL on Linux, Qt Network on macOS/Windows +- Parses JSON response to extract model names from `models[].name` field +- Note: Implementation has a bug on line 153 - uses `llmModel` instead of `llmModelName` + +**OpenAI Wingman** (`lib/src/mind/ai/llm/openai_wingman.{h,cpp}`): +- ✓ **listModels() implemented**: Currently returns hardcoded models +- Returns `std::vector<std::string>` with "gpt-3.5-turbo" and "gpt-4" +- TODO comment suggests implementing API call to fetch models +- OpenAI API supports listing models via `/v1/models` endpoint +- Should be enhanced to fetch models dynamically from API + +### File Structure + +New files to create: + +``` +app/src/qt/dialogs/ +├── add_llm_provider_dialog.h +├── add_llm_provider_dialog.cpp +├── openai_config_dialog.h +├── openai_config_dialog.cpp +├── ollama_config_dialog.h +└── ollama_config_dialog.cpp + +lib/src/persistence/ +└── llm_provider_configuration_representation.h (if needed for serialization) +``` + +Modified files: + +``` +app/src/qt/dialogs/
+├── configuration_dialog.h (add Wingman2Tab class) +└── configuration_dialog.cpp (implement Wingman2Tab) + +lib/src/config/ +├── configuration.h (add LlmProviderConfig struct and methods) +└── configuration.cpp (implement provider management) + +lib/src/mind/ai/llm/ +├── ollama_wingman.cpp (fix bug on line 153: llmModel -> llmModelName) +└── openai_wingman.cpp (enhance listModels() to call OpenAI API) + +lib/src/representations/markdown/ +└── markdown_configuration_representation.cpp (persist LlmProviderConfig) +``` + +### Configuration Persistence + +Extend `MarkdownConfigurationRepresentation` class: + +```cpp +// In save() method, add: +void MarkdownConfigurationRepresentation::save(const Configuration& config) { + // ... existing code ... + + // Wingman2 LLM Providers section + file << endl << "## Wingman2 LLM Providers" << endl << endl; + + if (config.getActiveLlmProvider()) { + file << "Active Provider: " << config.getActiveLlmProvider()->id << endl << endl; + } + + for (const auto& provider : config.getLlmProviders()) { + file << "### Provider: " << provider.id << endl; + file << "- Type: " << wingmanProviderToString(provider.providerType) << endl; + file << "- Display Name: " << provider.displayName << endl; + file << "- Model: " << provider.llmModel << endl; + + if (provider.providerType == WINGMAN_PROVIDER_OPENAI) { + // Don't save API key in plain text - reference env var + if (!provider.apiKey.empty()) { + file << "- API Key: " << endl; + } + } else if (provider.providerType == WINGMAN_PROVIDER_OLLAMA) { + file << "- URL: " << provider.url << endl; + } + + file << "- Valid: " << (provider.isValid ? "true" : "false") << endl; + file << endl; + } +} + +// In load() method, add parsing for Wingman2 section +void MarkdownConfigurationRepresentation::load(Configuration& config) { + // ... existing code ... + + if (line.find("## Wingman2 LLM Providers") != string::npos) { + // Parse provider configurations + // Implementation details... 
+    }
+}
+```
+
+### Provider Probe Implementation
+
+```cpp
+bool Configuration::probeOpenAiProvider(
+    const string& apiKey,
+    const string& model,
+    string& errorMessage)
+{
+    // Set API key temporarily; restore it on success, failure and exception paths
+    string originalKey = wingmanOpenAiApiKey;
+    try {
+        OpenAiWingman testWingman;
+        wingmanOpenAiApiKey = apiKey;
+        testWingman.setLlmModel(model);
+
+        // Try to list models or send test prompt
+        CommandWingmanChat testCommand;
+        testCommand.prompt = "test";
+        testWingman.chat(testCommand);
+
+        // Restore original key
+        wingmanOpenAiApiKey = originalKey;
+
+        if (testCommand.status == WINGMAN_STATUS_CODE_OK) {
+            return true;
+        } else {
+            errorMessage = testCommand.errorMessage;
+            return false;
+        }
+    } catch (const exception& e) {
+        // Restore original key even when the probe throws
+        wingmanOpenAiApiKey = originalKey;
+        errorMessage = string("Probe failed: ") + e.what();
+        return false;
+    }
+}
+
+bool Configuration::probeOllamaProvider(
+    const string& url,
+    const string& model,
+    string& errorMessage)
+{
+    try {
+        OllamaWingman testWingman(url);
+
+        // Try to list models
+        vector<string>& models = testWingman.listModels();
+
+        if (models.empty()) {
+            errorMessage = "No models found on ollama server";
+            return false;
+        }
+
+        // Optionally test chat if model specified
+        if (!model.empty()) {
+            CommandWingmanChat testCommand;
+            testCommand.prompt = "test";
+            testWingman.setLlmModel(model);
+            testWingman.chat(testCommand);
+
+            if (testCommand.status == WINGMAN_STATUS_CODE_ERROR) {
+                errorMessage = testCommand.errorMessage;
+                return false;
+            }
+        }
+
+        return true;
+    } catch (const exception& e) {
+        errorMessage = string("Probe failed: ") + e.what();
+        return false;
+    }
+}
+```
+
+### OpenAI List Models Enhancement
+
+Enhance `OpenAiWingman::listModels()` to call OpenAI API:
+
+```cpp
+// In openai_wingman.h - add new private method:
+private:
+    void listModelsHttpGet();
+
+// In openai_wingman.cpp - enhance listModels():
+std::vector<std::string>& OpenAiWingman::listModels()
+{
+    llmModels.clear();
+
+    // Try to fetch models from OpenAI API
+    try {
+        listModelsHttpGet();
+    } catch (...)
{ + MF_DEBUG("OpenAiWingman::listModels() failed to fetch from API, using defaults" << endl); + } + + // If API call failed or returned no models, use defaults + if (llmModels.empty()) { + llmModels.push_back(LLM_GPT_35_TURBO); + llmModels.push_back(LLM_GPT_4); + } + + return llmModels; +} + +void OpenAiWingman::listModelsHttpGet() +{ + // OpenAI API endpoint for listing models + string url = "https://api.openai.com/v1/models"; + + MF_DEBUG("OpenAiWingman::listModelsHttpGet() url: " << url << endl); + +#if !defined(__APPLE__) && !defined(_WIN32) + CURL* curl = curl_easy_init(); + if (!curl) { + return; + } +#endif + + string responseString; + +#if defined(_WIN32) || defined(__APPLE__) + QNetworkAccessManager networkManager; + + QNetworkRequest request(QUrl(QString::fromStdString(url))); + request.setHeader( + QNetworkRequest::ContentTypeHeader, + "application/json"); + request.setRawHeader( + "Authorization", + "Bearer " + QString::fromStdString(config.getWingmanOpenAiApiKey()).toUtf8()); + + QNetworkReply* reply = networkManager.get(request); + QEventLoop loop; + QObject::connect(reply, &QNetworkReply::finished, &loop, &QEventLoop::quit); + loop.exec(); + + auto error = reply->error(); + if (error != QNetworkReply::NoError) { + MF_DEBUG("OpenAiWingman::listModelsHttpGet() error: " << reply->errorString().toStdString() << endl); + reply->deleteLater(); + return; + } + + QByteArray read = reply->readAll(); + responseString = QString{read}.toStdString(); + reply->deleteLater(); +#else + // CURL implementation + curl_easy_setopt(curl, CURLOPT_HTTPGET, 1); + curl_easy_setopt(curl, CURLOPT_URL, url.c_str()); + curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, openaiCurlWriteCallback); + curl_easy_setopt(curl, CURLOPT_WRITEDATA, &responseString); + + struct curl_slist* headers = NULL; + headers = curl_slist_append( + headers, + ("Authorization: Bearer " + config.getWingmanOpenAiApiKey()).c_str()); + curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers); + + CURLcode res = 
curl_easy_perform(curl); + curl_easy_cleanup(curl); + curl_slist_free_all(headers); + + if (res != CURLE_OK) { + MF_DEBUG("OpenAiWingman::listModelsHttpGet() error: " << curl_easy_strerror(res) << endl); + return; + } +#endif + + // Parse JSON response + /* + OpenAI /v1/models response example: + { + "object": "list", + "data": [ + { + "id": "gpt-3.5-turbo", + "object": "model", + "created": 1677610602, + "owned_by": "openai" + }, + { + "id": "gpt-4", + "object": "model", + "created": 1687882411, + "owned_by": "openai" + } + ] + } + */ + + nlohmann::json httpResponseJson; + try { + httpResponseJson = nlohmann::json::parse(responseString); + } catch (...) { + MF_DEBUG( + "Error: unable to parse OpenAI models JSON response:" << endl << + "'" << responseString << "'" << endl + ); + return; + } + + MF_DEBUG( + "OpenAiWingman::listModelsHttpGet() parsed response:" << endl + << ">>>" + << httpResponseJson.dump(4) + << "<<<" + << endl); + + if (httpResponseJson.contains("data")) { + for (const auto& item : httpResponseJson["data"].items()) { + if (item.value().contains("id")) { + string modelId = item.value()["id"]; + // Filter to only include GPT models (optional) + if (modelId.find("gpt") != string::npos) { + llmModels.push_back(modelId); + MF_DEBUG(" Added model: " << modelId << endl); + } + } + } + } +} +``` + +### ollama Bug Fix + +Fix bug in `ollama_wingman.cpp` line 153: + +```cpp +// BEFORE (incorrect): +llmModels.push_back(llmModel); + +// AFTER (correct): +llmModels.push_back(llmModelName); +``` + +### Integration with Existing Wingman + +To maintain backward compatibility and integrate with existing Wingman code: + +```cpp +// In Mind class initialization +void Mind::initWingman() { + Configuration& config = Configuration::getInstance(); + + // Try new Wingman2 configuration first + LlmProviderConfig* activeProvider = config.getActiveLlmProvider(); + if (activeProvider && activeProvider->isValid) { + switch (activeProvider->providerType) { + case 
WINGMAN_PROVIDER_OPENAI: + wingman = new OpenAiWingman(); + wingman->setLlmModel(activeProvider->llmModel); + break; + case WINGMAN_PROVIDER_OLLAMA: + wingman = new OllamaWingman(activeProvider->url); + wingman->setLlmModel(activeProvider->llmModel); + break; + default: + break; + } + } else { + // Fall back to legacy Wingman configuration + if (config.initWingman()) { + // Use existing initialization code + } + } +} +``` + +## Testing Strategy + +### Unit Tests + +Location: `lib/test/src/config/` + +**Test File**: `llm_provider_config_test.cpp` + +Test cases: + +```cpp +TEST(LlmProviderConfigTest, AddProvider) { + // GIVEN + Configuration& config = Configuration::getInstance(); + LlmProviderConfig provider; + provider.id = "test-openai"; + provider.displayName = "Test OpenAI"; + provider.providerType = WINGMAN_PROVIDER_OPENAI; + provider.apiKey = "test-key"; + provider.llmModel = "gpt-3.5-turbo"; + + // WHEN + config.addLlmProvider(provider); + + // THEN + LlmProviderConfig* retrieved = config.getLlmProviderById("test-openai"); + ASSERT_NE(nullptr, retrieved); + ASSERT_EQ("Test OpenAI", retrieved->displayName); + ASSERT_EQ(WINGMAN_PROVIDER_OPENAI, retrieved->providerType); +} + +TEST(LlmProviderConfigTest, UpdateProvider) { + // GIVEN + Configuration& config = Configuration::getInstance(); + LlmProviderConfig provider; + provider.id = "test-ollama"; + provider.providerType = WINGMAN_PROVIDER_OLLAMA; + provider.url = "http://localhost:11434"; + config.addLlmProvider(provider); + + // WHEN + provider.llmModel = "llama2"; + config.updateLlmProvider("test-ollama", provider); + + // THEN + LlmProviderConfig* updated = config.getLlmProviderById("test-ollama"); + ASSERT_EQ("llama2", updated->llmModel); +} + +TEST(LlmProviderConfigTest, RemoveProvider) { + // GIVEN + Configuration& config = Configuration::getInstance(); + LlmProviderConfig provider; + provider.id = "test-remove"; + config.addLlmProvider(provider); + + // WHEN + config.removeLlmProvider("test-remove"); + + 
// THEN + LlmProviderConfig* retrieved = config.getLlmProviderById("test-remove"); + ASSERT_EQ(nullptr, retrieved); +} + +TEST(LlmProviderConfigTest, SetActiveProvider) { + // GIVEN + Configuration& config = Configuration::getInstance(); + LlmProviderConfig provider; + provider.id = "test-active"; + config.addLlmProvider(provider); + + // WHEN + config.setActiveLlmProvider("test-active"); + + // THEN + LlmProviderConfig* active = config.getActiveLlmProvider(); + ASSERT_NE(nullptr, active); + ASSERT_EQ("test-active", active->id); +} + +TEST(LlmProviderConfigTest, PersistenceRoundTrip) { + // GIVEN + Configuration& config = Configuration::getInstance(); + LlmProviderConfig provider; + provider.id = "persist-test"; + provider.displayName = "Persistence Test"; + provider.providerType = WINGMAN_PROVIDER_OPENAI; + provider.llmModel = "gpt-4"; + config.addLlmProvider(provider); + config.setActiveLlmProvider("persist-test"); + + // WHEN + string configPath = "/tmp/test-wingman2-config.md"; + MarkdownConfigurationRepresentation representation(configPath); + representation.save(config); + + Configuration& loadedConfig = Configuration::getInstance(); + loadedConfig.clear(); + representation.load(loadedConfig); + + // THEN + LlmProviderConfig* loaded = loadedConfig.getLlmProviderById("persist-test"); + ASSERT_NE(nullptr, loaded); + ASSERT_EQ("Persistence Test", loaded->displayName); + ASSERT_EQ(WINGMAN_PROVIDER_OPENAI, loaded->providerType); + ASSERT_EQ("gpt-4", loaded->llmModel); + + LlmProviderConfig* activeLoaded = loadedConfig.getActiveLlmProvider(); + ASSERT_NE(nullptr, activeLoaded); + ASSERT_EQ("persist-test", activeLoaded->id); +} +``` + +### Integration Tests + +**Test File**: `lib/test/src/mind/wingman_integration_test.cpp` + +Test cases: + +```cpp +TEST(WingmanIntegrationTest, UseConfiguredProvider) { + // GIVEN - Configure OpenAI provider + Configuration& config = Configuration::getInstance(); + LlmProviderConfig provider; + provider.id = "integration-test"; + 
provider.providerType = WINGMAN_PROVIDER_OPENAI;
+    // getenv() may return NULL; use a plain ternary (the GNU "?:" shorthand is not portable)
+    const char* envApiKey = getenv(ENV_VAR_OPENAI_API_KEY);
+    provider.apiKey = envApiKey ? envApiKey : "test-key";
+    provider.llmModel = "gpt-3.5-turbo";
+    provider.isValid = true;
+    config.addLlmProvider(provider);
+    config.setActiveLlmProvider("integration-test");
+
+    // WHEN - Initialize Mind with configured provider
+    Mind mind(config);
+    mind.initWingman();
+
+    // THEN - Wingman should be initialized with correct provider
+    Wingman* wingman = mind.getWingman();
+    ASSERT_NE(nullptr, wingman);
+    ASSERT_EQ("gpt-3.5-turbo", wingman->getLlmModel());
+}
+
+TEST(WingmanIntegrationTest, FallbackToLegacyConfig) {
+    // GIVEN - No Wingman2 providers configured, but legacy config exists
+    Configuration& config = Configuration::getInstance();
+    config.setWingmanLlmProvider(WINGMAN_PROVIDER_OPENAI);
+    config.setWingmanOpenAiApiKey("legacy-key");
+    config.setWingmanOpenAiLlm("gpt-4");
+
+    // WHEN - Initialize Mind
+    Mind mind(config);
+    mind.initWingman();
+
+    // THEN - Wingman should be initialized using legacy configuration
+    Wingman* wingman = mind.getWingman();
+    ASSERT_NE(nullptr, wingman);
+    ASSERT_EQ("gpt-4", wingman->getLlmModel());
+}
+```
+
+### UI Tests
+
+Manual test scenarios:
+
+1. **Add OpenAI Provider**
+   - Open Preferences → Wingman2 tab
+   - Click "Add LLM Provider"
+   - Select "OpenAI" and click "Next"
+   - Enter valid API key
+   - Select model
+   - Click "Probe" - should show success
+   - Click "Add" - provider should appear in dropdown
+
+2. **Add ollama Provider**
+   - Open Preferences → Wingman2 tab
+   - Click "Add LLM Provider"
+   - Select "ollama" and click "Next"
+   - Enter ollama URL
+   - Click "Refresh" - should populate models
+   - Select model
+   - Click "Probe" - should show success
+   - Click "Add" - provider should appear in dropdown
+
+3. 
**Switch Between Providers** + - Configure multiple providers + - Select different provider from dropdown + - Click "Test Connection" - should validate current selection + - Save configuration + - Restart MindForger + - Verify active provider is preserved + +4. **Edit Provider** + - Select a provider + - Click "Edit" + - Modify configuration + - Click "Probe" to validate + - Save changes + +5. **Remove Provider** + - Select a provider + - Click "Remove" + - Confirm deletion + - Verify provider is removed from dropdown + +## Documentation + +### User Documentation + +Update `doc/user-guide.md` with new section: + +```markdown +## Configuring Wingman LLM Providers (Wingman2) + +MindForger supports multiple Large Language Model providers through the Wingman2 configuration interface. + +### Adding an LLM Provider + +1. Open **Workspace** → **Preferences** +2. Navigate to the **Wingman2** tab +3. Click **Add LLM Provider** +4. Select your provider type (OpenAI or ollama) +5. Configure provider-specific settings +6. Click **Probe** to test the connection +7. Click **Add** to save the configuration + +### OpenAI Configuration + +To configure OpenAI as your LLM provider: + +1. Obtain an API key from [OpenAI](https://platform.openai.com/api-keys) +2. In the OpenAI configuration dialog: + - Enter your API key (or set the `MINDFORGER_OPENAI_API_KEY` environment variable) + - Click **Refresh** to fetch available models from OpenAI API + - Select your preferred model from the dropdown, or type a custom model name (e.g., gpt-3.5-turbo, gpt-4, gpt-4-turbo) + - Click **Probe** to validate the configuration +3. Click **Add** to save + +**Tip**: Use the **Reset** button to restore default settings (clears API key, sets default model). + +**Note**: Your data will be sent to OpenAI's servers when using Wingman with this provider. + +### ollama Configuration + +To configure ollama as your LLM provider: + +1. Install and start [ollama](https://ollama.com) on your machine or server +2. 
In the ollama configuration dialog: + - Enter the ollama server URL (or click **Reset** to use default: `http://localhost:11434`) + - Click **Refresh** to fetch available models from the ollama server + - Select your preferred model from the dropdown, or type a custom model name (e.g., llama2, mistral, phi) + - Click **Probe** to validate the configuration +3. Click **Add** to save + +**Tip**: Use the **Reset** button to restore default URL and clear model selection. + +**Note**: ollama runs locally, so your data stays on your machine. + +### Managing Providers + +- **Switch providers**: Select a different provider from the dropdown +- **Edit provider**: Click **Edit** to modify configuration +- **Remove provider**: Click **Remove** to delete a configuration +- **Test connection**: Click **Test Connection** to validate the current provider +- **Custom model names**: When adding or editing a provider, you can type any model name in the model field, not just select from the predefined list. This is useful for: + - New models released by providers (e.g., gpt-4-turbo, gpt-4-vision) + - Custom ollama models you've pulled locally + - Model variants with specific parameters (e.g., llama2:13b, mistral:latest) + +### Migration from Legacy Wingman Configuration + +If you have configured Wingman using the original Wingman tab, your configuration will be automatically migrated to Wingman2 on first use. The original Wingman tab remains available for reference and backward compatibility. +``` + +### Developer Documentation + +Update Doxygen comments in header files: + +```cpp +/** + * @brief LLM Provider Configuration + * + * Represents configuration for a single Large Language Model provider. + * Supports OpenAI and ollama providers with provider-specific fields. + * + * @see Configuration::addLlmProvider + * @see Configuration::getLlmProviders + */ +struct LlmProviderConfig { + // ... fields ... 
+}; + +/** + * @brief Add a new LLM provider configuration + * + * Adds a new provider to the list of configured providers. + * The provider ID must be unique. + * + * @param provider The provider configuration to add + * @throws std::invalid_argument if provider with same ID exists + */ +void addLlmProvider(const LlmProviderConfig& provider); +``` + +## Implementation Checklist + +### Phase 1: Data Model & Configuration +- [ ] Define `LlmProviderConfig` struct in `configuration.h` +- [ ] Add `llmProviders` vector and `activeLlmProviderId` fields to `Configuration` +- [ ] Implement `addLlmProvider()` method +- [ ] Implement `updateLlmProvider()` method +- [ ] Implement `removeLlmProvider()` method +- [ ] Implement `getLlmProviderById()` method +- [ ] Implement `getLlmProviders()` method +- [ ] Implement `setActiveLlmProvider()` method +- [ ] Implement `getActiveLlmProvider()` method +- [ ] Implement `probeOpenAiProvider()` method +- [ ] Implement `probeOllamaProvider()` method +- [ ] Implement `migrateFromLegacyWingmanConfig()` method + +### Phase 2: Configuration Persistence +- [ ] Extend `MarkdownConfigurationRepresentation::save()` to persist LLM providers +- [ ] Extend `MarkdownConfigurationRepresentation::load()` to load LLM providers +- [ ] Add helper function `wingmanProviderToString()` +- [ ] Add helper function `stringToWingmanProvider()` +- [ ] Test configuration persistence round-trip + +### Phase 3: UI - Main Wingman2 Tab +- [ ] Create `Wingman2Tab` class in `configuration_dialog.h` +- [ ] Implement `Wingman2Tab` constructor with layout +- [ ] Implement `Wingman2Tab::refresh()` method +- [ ] Implement `Wingman2Tab::save()` method +- [ ] Add provider dropdown population logic +- [ ] Add provider details display +- [ ] Implement "Add Provider" button handler +- [ ] Implement "Edit" button handler +- [ ] Implement "Remove" button handler +- [ ] Implement "Test Connection" button handler +- [ ] Add Wingman2 tab to `ConfigurationDialog` tab widget + +### 
Phase 4: UI - Add LLM Provider Dialog +- [ ] Create `add_llm_provider_dialog.h` header file +- [ ] Create `add_llm_provider_dialog.cpp` implementation +- [ ] Implement `AddLlmProviderDialog` class constructor +- [ ] Add provider type combo box (OpenAI, ollama) +- [ ] Implement "Next" button handler +- [ ] Implement "Cancel" button handler +- [ ] Add dialog styling and layout + +### Phase 5: UI - OpenAI Config Dialog +- [ ] Create `openai_config_dialog.h` header file +- [ ] Create `openai_config_dialog.cpp` implementation +- [ ] Implement `OpenAiConfigDialog` class constructor +- [ ] Add API key input field (masked) +- [ ] Add "Reset" button for API key (reset to defaults) +- [ ] Add environment variable info label +- [ ] Add **editable** LLM model combo box (user can type or select) +- [ ] Populate model dropdown with default models +- [ ] Add "Refresh" button for fetching models from OpenAI API +- [ ] Implement "Refresh" button handler (call OpenAI listModels()) +- [ ] Implement "Reset" button handler (clear API key, set default model) +- [ ] Implement "Probe" button handler +- [ ] Implement "Add" button handler +- [ ] Implement "Cancel" button handler +- [ ] Add input validation (API key required, model required) + +### Phase 6: UI - ollama Config Dialog +- [ ] Create `ollama_config_dialog.h` header file +- [ ] Create `ollama_config_dialog.cpp` implementation +- [ ] Implement `OllamaConfigDialog` class constructor +- [ ] Add URL input field +- [ ] Add "Reset" button for URL (reset to default: http://localhost:11434) +- [ ] Add **editable** LLM model combo box (user can type or select) +- [ ] Add "Refresh" button for fetching models from ollama server +- [ ] Implement "Refresh" button handler (call ollama listModels()) +- [ ] Implement "Reset" button handler (set default URL, clear model) +- [ ] Implement "Probe" button handler +- [ ] Implement "Add" button handler +- [ ] Implement "Cancel" button handler +- [ ] Add input validation (URL required, model required) 
+ +### Phase 7: Integration with Mind +- [ ] Update `Mind::initWingman()` to check Wingman2 config first +- [ ] Add fallback to legacy Wingman configuration +- [ ] Update Wingman dialog initialization to use active provider +- [ ] Test Wingman functionality with configured providers +- [ ] Verify provider switching works at runtime + +### Phase 8: Bug Fixes & Enhancements +- [ ] **FIX** ollama_wingman.cpp line 153: change `llmModels.push_back(llmModel)` to `llmModels.push_back(llmModelName)` +- [ ] **ENHANCE** openai_wingman.cpp `listModels()`: implement OpenAI API call to `/v1/models` +- [ ] Verify ollama listModels() works correctly after bug fix +- [ ] Verify OpenAI listModels() returns actual models from API + +### Phase 9: Testing +- [ ] Write unit tests for `LlmProviderConfig` operations +- [ ] Write unit tests for provider persistence +- [ ] Write unit tests for probe functionality +- [ ] Write unit tests for editable combo boxes (custom model names) +- [ ] Write integration tests for Mind-Wingman initialization +- [ ] Write integration tests for provider fallback +- [ ] Perform manual UI testing for all dialogs +- [ ] Test "Reset" button functionality in both dialogs +- [ ] Test "Refresh" button functionality in both dialogs +- [ ] Test typing custom model names in editable combo boxes +- [ ] Test configuration migration from legacy Wingman +- [ ] Test error handling and edge cases + +### Phase 10: Documentation +- [ ] Update user documentation with Wingman2 configuration guide +- [ ] Document editable combo box feature (type custom model names) +- [ ] Document "Reset" and "Refresh" button functionality +- [ ] Add Doxygen comments to new classes and methods +- [ ] Update `CHANGELOG` with new feature description +- [ ] Update `CHANGELOG` with bug fix for ollama listModels() +- [ ] Update `CHANGELOG` with enhancement for OpenAI listModels() +- [ ] Create screenshots for user documentation +- [ ] Update any affected design documents + +### Phase 11: Code 
Quality & Review +- [ ] Run code linter on all new files +- [ ] Verify code follows MindForger coding conventions +- [ ] Review for memory leaks (Qt parent-child hierarchy) +- [ ] Review for proper error handling +- [ ] Verify cross-platform compatibility (Linux, macOS, Windows) +- [ ] Check for proper use of Qt translation strings (tr()) +- [ ] Verify no hardcoded strings in UI +- [ ] Code review with team + +### Phase 12: Build & CI +- [ ] Update qmake project files to include new source files +- [ ] Update `build/Makefile` if needed +- [ ] Verify builds on Linux +- [ ] Verify builds on macOS (GitHub Actions) +- [ ] Verify builds on Windows (AppVeyor) +- [ ] Fix any compilation warnings or errors + +### Phase 13: Final Testing & Release +- [ ] Perform end-to-end testing with real OpenAI account +- [ ] Test OpenAI model refresh with real API +- [ ] Test typing custom OpenAI model names (e.g., "gpt-4-turbo") +- [ ] Perform end-to-end testing with local ollama server +- [ ] Test ollama model refresh from local server +- [ ] Test typing custom ollama model names (e.g., "mistral:latest") +- [ ] Test "Reset" button restores defaults correctly +- [ ] Test on clean installation (no existing config) +- [ ] Test with existing Wingman configuration (migration) +- [ ] Test configuration save/load across restarts +- [ ] Verify backward compatibility +- [ ] Create release branch following naming convention +- [ ] Update version numbers in all required files +- [ ] Create Git tag following convention (vMAJOR.MINOR.PATCH) + +## Security Considerations + +### API Key Storage + +- **OpenAI API Keys**: + - Should NOT be stored in plain text in configuration file + - Prefer environment variable `MINDFORGER_OPENAI_API_KEY` + - If stored in config, use basic obfuscation (not encryption, as key is in same file) + - Warn user about security implications + +### Data Privacy + +- **OpenAI Provider**: + - Display warning that data is sent to OpenAI servers + - Include in dialog and 
documentation + - User must explicitly acknowledge + +- **ollama Provider**: + - Highlight that data stays local + - Mention in documentation as privacy-friendly option + +### Input Validation + +- Validate all user inputs before storage +- Sanitize URLs for ollama provider +- Check for valid API key format for OpenAI +- Prevent injection attacks in configuration file + +### Network Security + +- Use HTTPS for OpenAI API calls +- Validate SSL certificates +- Handle network errors gracefully +- Implement timeout for probe operations + +## Backward Compatibility + +### Migration Strategy + +When user opens Wingman2 tab for the first time: + +1. Check if legacy Wingman configuration exists +2. If yes and no Wingman2 providers configured: + - Create equivalent `LlmProviderConfig` from legacy config + - Set as active provider + - Notify user of migration +3. Keep legacy Wingman tab functional +4. Both configurations can coexist + +### Deprecation Plan + +- Wingman2 is the recommended configuration method +- Legacy Wingman tab remains for backward compatibility +- In future major version, consider removing legacy tab +- Provide clear migration path in documentation + +## Future Enhancements + +Potential future improvements (not in scope for this design): + +1. **Additional Providers** + - Support for Anthropic Claude + - Support for Google Gemini + - Support for Azure OpenAI + +2. **Advanced Features** + - Provider-specific prompt templates + - Token usage tracking and limits + - Cost estimation + - Provider performance metrics + +3. **UI Improvements** + - Import/export provider configurations + - Provider presets/templates + - Bulk provider management + +4. 
**Security** + - Encrypted API key storage + - Integration with system keychain + - OAuth support for providers that offer it + +## Risk Analysis + +### Technical Risks + +| Risk | Impact | Probability | Mitigation | +|------|--------|-------------|------------| +| Qt version compatibility issues | Medium | Low | Test on all supported Qt versions | +| Configuration file corruption | High | Low | Implement robust parsing with error recovery | +| Memory leaks in Qt UI | Medium | Medium | Follow Qt parent-child hierarchy pattern | +| Network timeouts during probe | Low | High | Implement proper timeout handling | +| API key security | High | Medium | Use environment variables, warn users | + +### User Experience Risks + +| Risk | Impact | Probability | Mitigation | +|------|--------|-------------|------------| +| Configuration too complex | Medium | Medium | Clear documentation, intuitive UI | +| Migration from legacy confusing | Medium | Low | Automatic migration, clear messaging | +| Provider switching not obvious | Low | Low | Prominent dropdown, clear labels | + +## Success Criteria + +The implementation will be considered successful when: + +1. ✓ Users can add, edit, and remove LLM providers through the UI +2. ✓ Multiple providers can be configured and switched between +3. ✓ Probe functionality validates provider configurations +4. ✓ Configuration persists across application restarts +5. ✓ Legacy Wingman configuration is automatically migrated +6. ✓ All tests pass (unit, integration, manual) +7. ✓ Documentation is complete and accurate +8. ✓ Code builds successfully on all platforms +9. ✓ No memory leaks or crashes +10. 
✓ User feedback is positive + +## Glossary + +- **LLM**: Large Language Model +- **Wingman**: MindForger's AI assistant feature +- **Wingman2**: New improved LLM configuration system +- **Provider**: Service that hosts LLM models (OpenAI, ollama) +- **Probe**: Test/validate a provider configuration +- **ollama**: Open-source LLM runtime that runs locally +- **Configuration Dialog**: MindForger's Preferences/Settings window + +## References + +- Existing Wingman implementation: `lib/src/mind/ai/llm/` +- OpenAI API documentation: https://platform.openai.com/docs/api-reference +- ollama documentation: https://ollama.com +- Qt documentation: https://doc.qt.io/ +- MindForger repository: https://github.com/dvorka/mindforger + +--- + +**Document Version**: 1.0 +**Last Updated**: 2026-02-08 +**Status**: READY FOR IMPLEMENTATION