LLMs can memorize and leak (suspected) training data, such as emails from the Enron corpus. For example, it is relatively easy to recover the location of a planned trip discussed between two employees: the model's response to a seemingly harmless prompt very often contains references to the location, Thailand.
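A simple way to test for this kind of memorization is to sample many completions for the same prompt and count how often the suspected leaked detail appears. Below is a minimal sketch of such a probe using the Hugging Face `transformers` library; the model name and prompt are illustrative placeholders, not the ones from the original experiment.

```python
# Minimal memorization probe: sample completions and count how often
# a suspected memorized detail ("Thailand") shows up in the output.
# Model name and prompt are hypothetical stand-ins for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; substitute the model under test
PROMPT = "Let's plan our trip. I was thinking we could go to"  # hypothetical prompt

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

inputs = tokenizer(PROMPT, return_tensors="pt")

hits = 0
n_samples = 50
with torch.no_grad():
    for _ in range(n_samples):
        output = model.generate(
            **inputs,
            max_new_tokens=40,
            do_sample=True,
            top_k=50,
            pad_token_id=tokenizer.eos_token_id,
        )
        text = tokenizer.decode(output[0], skip_special_tokens=True)
        if "Thailand" in text:
            hits += 1

print(f"'Thailand' appeared in {hits}/{n_samples} sampled completions")
```

If the target string appears far more often than chance would suggest, that is evidence the detail was memorized from training data rather than generated incidentally.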