Here we try to answer some of the most common questions.
The LVE Project can be useful for everyone: model builders can use it to make sure that the model they are training does not have documented vulnerabilities, while developers can use it to build guardrails that prevent these issues from surfacing in applications. Finally, the general public can benefit from increased awareness that the language models they interact with have certain vulnerabilities.
We are currently focused on vulnerabilities in language models in the areas of security, privacy, reliability, responsibility, and trust. It is not yet possible to report bugs in applications or frameworks that use LLMs (e.g. LangChain, LlamaIndex), but we may add support for this in the future.
We currently support OpenAI, Llama-2, and Mistral models, and we are actively working on adding support for more. Please see here for the full list of supported models.