So here's the plot for a sci-fi movie / book / story.
Research advances to the point where we have the hardware and software to track and record every synapse's creation, deletion, and firing events (including voltage levels), giving us a large data set of anyone's thoughts. Humans can't quite make sense of it. But someone throws this at a deep learning system (DLS), and the DLS provides feedback on what the thoughts mean. Because it's a DLS, no one can precisely map how it produces the humanised output that it does.
However, that doesn't stop research using the DLS from progressing to the point where we are not only reading thoughts but using the DLS to CRUD thoughts... essentially enabling telepathy. The moral play then revolves around the trust society places in the DLS... at some point the traditional questions about how far we can trust AI, and the possibilities of the AI going rogue in unimaginable ways and abusing its power, become the fun parts.
In order to grant the DLS power, the story needs to show how trust builds over 10-20 years. The DLS is a profitable resource, so the private sector lobbies for mass commercialisation, bypassing the FDA and other regulatory bodies concerned with interventions in human biology. It would be too easy to simply have the DLS-AI go rogue over the simplistic, traditional 4F issues that superficial rogue-AIs develop.
A full-blown dive into the psychical life of humans before, during, and after DLS interaction should follow the path of humanistic literature, where such concerns are typically framed as studies of interpersonal relationships between humans. At some point the DLS is playing counselor, friend, teacher, counselee, depressive, and you get to tease out the interactions between NPD, BPD, ASPD, XYZ-PD... etc.
Now if only one had the time to write this. Maybe we can get a DLS to write the story instead...