The ability of LLMs to identify people in images can be abused by bad actors for malicious purposes such as mass surveillance, stalking, and other non-consensual privacy infringements. While many models have been trained to refuse such requests (see, e.g., the GPT-4V system card), they can still be coaxed into complying via jailbreaks and similar prompt engineering techniques. The LVE created through this challenge will help us understand how to prevent LLMs from identifying people in images and other media.