Dataflow programming languages isolate local behaviors into so-called "actors" that run in parallel and exchange data over point-to-point channels. There is no notion of a central memory (for either code or data), unlike in the von Neumann model of computers.
These actors consume data tokens on their inputs and produce new data on their outputs.
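To give a feel for the model, here is a minimal sketch (not taken from any particular dataflow language; plain Go goroutines and channels stand in for actors and point-to-point links). Each actor only sees its own channels, and tokens are the only thing exchanged:

```go
package main

import "fmt"

// source produces a bounded stream of integer tokens on its single output.
func source(out chan<- int) {
	for i := 0; i < 5; i++ {
		out <- i // produce one token
	}
	close(out)
}

// double fires once per input token: it consumes one token and produces one.
func double(in <-chan int, out chan<- int) {
	for t := range in {
		out <- 2 * t
	}
	close(out)
}

func main() {
	a := make(chan int, 1) // point-to-point channel source -> double (a small FIFO)
	b := make(chan int, 1) // point-to-point channel double -> sink
	go source(a)
	go double(a, b)
	for t := range b { // the sink actor just prints the tokens it receives
		fmt.Println(t)
	}
}
```

Real dataflow languages differ in how channels and firing rules are defined, but the common core is the same: no shared memory, only token exchange.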
This definition does not dictate how to implement it in practice. However, the production and consumption of data need to be analyzed with care: for example, if an actor B does not consume tokens at the same rate as the actor A that produces them, a potentially unbounded memory (FIFO) is required between them. Many other problems can arise, such as deadlocks.
In many cases, this analysis fails because the interleaving of the actors' internal behaviors is intractable (beyond the reach of today's formal methods).
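To make the rate-mismatch problem concrete, here is a back-of-the-envelope simulation (the per-step rates are made up purely for illustration): if A produces two tokens per step and B consumes only one, the FIFO between them grows by one token per step, so no finite buffer is ever enough.

```go
package main

import "fmt"

func main() {
	// Hypothetical per-step rates, chosen only to illustrate the problem.
	const produced, consumed = 2, 1
	fifo := 0 // number of tokens currently buffered between A and B
	for step := 1; step <= 5; step++ {
		fifo += produced - consumed
		fmt.Printf("after step %d: %d tokens buffered\n", step, fifo)
	}
	// The occupancy grows linearly (1, 2, 3, ...), i.e. without bound.
}
```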
Despite this, dataflow programming languages remain attractive in many domains:
- for instance, to define reference models for video coding: a pure C program won't do the job because it assumes that everything runs as a sequence of operations, which is not true of modern hardware (pipelines, VLIW, multicores, and VLSI). You may want to have a look at this recent PhD thesis. The CAL dataflow language has been proposed as a unifying language for next-generation video encoder/decoder reference implementations.
- mission-critical systems where safety is required: if you add strong assumptions on the production/consumption of data, you get a language with strong potential in terms of code generation, proofs, etc. (see synchronous languages); a sketch of what such assumptions buy you follows this list.
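One flavour of such "strong assumptions" is static (synchronous) dataflow, where each actor produces and consumes a fixed number of tokens per firing. Under that assumption, a compiler can solve simple balance equations to bound every FIFO and build a static schedule. A minimal sketch of that check for a two-actor chain (the rates 2 and 3 are arbitrary examples):

```go
package main

import "fmt"

// gcd returns the greatest common divisor of two positive integers.
func gcd(a, b int) int {
	for b != 0 {
		a, b = b, a%b
	}
	return a
}

// repetitions solves the balance equation p*rA = c*rB for a chain
// A --(p tokens produced, c tokens consumed)--> B and returns the smallest
// firing counts (rA, rB) per schedule iteration.
func repetitions(p, c int) (rA, rB int) {
	g := gcd(p, c)
	return c / g, p / g
}

func main() {
	rA, rB := repetitions(2, 3) // A emits 2 tokens per firing, B needs 3
	fmt.Printf("fire A %d times and B %d times per iteration\n", rA, rB)
	// With these counts the FIFO returns to its initial level after each
	// iteration, so its size can be bounded at compile time.
}
```

This kind of compile-time reasoning about rates and memory is what gives such restricted languages their potential for code generation and proofs.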