I thought that loading text files is done only from workers / within the cluster (you just need to make sure all workers have access to the same path, either by having that text file available on all workers, or by using some shared network file system).
Spark can read files either locally or from HDFS.
If you'd like to read a file using sc.textFile() and take advantage of its RDD format, the file should sit on HDFS (or otherwise be reachable from every worker at the same path). If you just want to read a file the normal way, you do it as you usually would, depending on the API (Scala, Java, Python).
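A minimal PySpark sketch of the two cases (the paths `hdfs:///data/input.txt` and `/tmp/local_input.txt` are placeholders, not paths from the question):

```python
from pyspark import SparkContext

sc = SparkContext(appName="read-example")

# Distributed read via sc.textFile(): returns an RDD. The path must resolve
# on every worker; an HDFS path satisfies that naturally.
rdd = sc.textFile("hdfs:///data/input.txt")
print(rdd.count())

# A local path also works, but only if every worker has the file at the
# same location:
# rdd = sc.textFile("file:///tmp/shared_input.txt")

# The "normal" read, driver-side only, no RDD involved:
with open("/tmp/local_input.txt") as f:  # placeholder local path
    lines = f.read().splitlines()

sc.stop()
```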
If you submit a local file with your driver, then addFile() distributes the file to each node, and SparkFiles.get() returns the path of the node-local copy (Spark downloads the file into a temporary directory on each node).
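For example, a sketch of that pattern in PySpark (the file `lookup.txt` and its driver-side path are hypothetical):

```python
from pyspark import SparkContext, SparkFiles

sc = SparkContext(appName="addfile-example")

# Ship a driver-local file to every node in the cluster.
sc.addFile("/path/on/driver/lookup.txt")

def first_line(_):
    # SparkFiles.get() resolves the node-local copy that addFile() shipped.
    with open(SparkFiles.get("lookup.txt")) as f:
        return [f.readline().strip()]

# Each task opens its own local copy; no shared filesystem is needed.
print(sc.parallelize(range(2), 2).flatMap(first_line).collect())

sc.stop()
```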