Multiple Inputs with MRJob


Question


I'm trying to learn to use Yelp's Python API for MapReduce, MRJob. Their simple word-counter example makes sense, but I'm curious how one would handle an application involving multiple inputs. For instance, rather than simply counting the words in a document, consider multiplying a vector by a matrix. I came up with this solution, which works, but feels silly:

from mrjob.job import MRJob

class MatrixVectMultiplyTast(MRJob):
    def multiply(self, key, line):
        # Each input line is one matrix column followed by the matching vector value.
        line = map(float, line.split(" "))
        v, col = line[-1], line[:-1]

        for i in xrange(len(col)):
            yield i, col[i] * v

    def sum(self, i, occurrences):
        yield i, sum(occurrences)

    def steps(self):
        return [self.mr(self.multiply, self.sum)]

if __name__ == "__main__":
    MatrixVectMultiplyTast.run()

This code is run with ./matrix.py < input.txt, and the reason it works is that the matrix is stored in input.txt by columns, with the corresponding vector value at the end of each line.

So, a matrix and vector (shown as images in the original post, not reproduced here) are represented in input.txt with one matrix column per line and the corresponding vector entry at the end.
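For illustration (standing in for the missing images), here is a hypothetical 2x2 case, together with a plain-Python check of what the mapper/reducer pair computes; all numbers are made up:

# Hypothetical example: A = [[1, 2], [3, 4]], x = [5, 6], so A * x = [17, 39].
# input.txt then holds one matrix column per line, with the vector entry last:
#
#   1 3 5
#   2 4 6
#
# The mapper emits (row_index, column_entry * vector_entry) and the reducer
# sums them per row index, reproducing the ordinary matrix-vector product:
columns = [[1.0, 3.0, 5.0], [2.0, 4.0, 6.0]]  # the parsed lines of input.txt
result = {}
for parsed_line in columns:
    v, col = parsed_line[-1], parsed_line[:-1]
    for i in range(len(col)):
        result[i] = result.get(i, 0.0) + col[i] * v
assert result == {0: 17.0, 1: 39.0}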

In short, how would I go about storing the matrix and vector more naturally in separate files and passing them both into MRJob?


Answer 1:


If you need to process your raw data against another data set (or against the same one, e.g. row_i against row_j), you can either:

1) Create an S3 bucket to store a copy of your data. Pass the location of this copy to your task class, e.g. self.options.bucket and self.options.my_datafile_copy_location in the code below. Caveat: unfortunately, it seems the whole file must be "downloaded" to the task machines before it can be processed, so if the connection falters or it takes too long to load, the job may fail. Here is some Python/MRJob code to do this.

Put this in your mapper function:

# Assumes `import boto` at module level; `line1` is the raw line passed to the mapper.
d1 = line1.split('\t', 1)
v1, col1 = d1[0], d1[1]
conn = boto.connect_s3(aws_access_key_id=<AWS_ACCESS_KEY_ID>, aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>)
bucket = conn.get_bucket(self.options.bucket)  # bucket = conn.get_bucket(MY_UNIQUE_BUCKET_NAME_AS_STRING)
data_copy = bucket.get_key(self.options.my_datafile_copy_location).get_contents_as_string().rstrip()
### CAVEAT: the whole file is fetched before the rest can be processed.
for line2 in data_copy.split('\n'):
    d2 = line2.split('\t', 1)
    v2, col2 = d2[0], d2[1]
    ## Now, insert code to do any operations between v1 and v2 (or col1 and col2) here:
    yield <your output key, value pairs>
conn.close()
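For completeness, the two passthrough options referenced above (self.options.bucket and self.options.my_datafile_copy_location) have to be declared on the job class. A minimal sketch, assuming an older mrjob release where configure_options()/add_passthrough_option() are available (newer releases renamed them to configure_args()/add_passthru_arg()); the option names are only illustrative:

def configure_options(self):
    super(YourJobClass, self).configure_options()  # YourJobClass = whatever your MRJob subclass is called
    # These option names are assumptions; they surface as self.options.bucket
    # and self.options.my_datafile_copy_location in the mapper code above.
    self.add_passthrough_option('--bucket', help='S3 bucket holding the side-data copy')
    self.add_passthrough_option('--my-datafile-copy-location', help='S3 key of the side-data file')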

2) Create a SimpleDB domain and store all of your data there. Read about boto and SimpleDB here: http://code.google.com/p/boto/wiki/SimpleDbIntro

Your mapper code would look like this:

# Assumes `import boto` at module level; `dline` is the raw line passed to the mapper.
dline = dline.strip()
d0 = dline.split('\t', 1)
v1, c1 = d0[0], d0[1]
sdb = boto.connect_sdb(aws_access_key_id=<AWS_ACCESS_KEY>, aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>)
domain = sdb.get_domain(MY_DOMAIN_STRING_NAME)
for item in domain:
    v2, c2 = item.name, item['column']
    ## Now, insert code to do any operations between v1 and v2 (or c1 and c2) here:
    yield <your output key, value pairs>
sdb.close()
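Before the job runs, the side data has to be loaded into that domain. A rough boto sketch (the domain name, the local file name, and the 'column' attribute are made up to match the mapper above):

import boto

sdb = boto.connect_sdb(aws_access_key_id=<AWS_ACCESS_KEY>, aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>)
domain = sdb.create_domain(MY_DOMAIN_STRING_NAME)   # creates (or reuses) the domain
with open('my_datafile.txt') as f:                  # hypothetical local copy of the side data
    for line in f:
        v2, c2 = line.rstrip('\n').split('\t', 1)
        domain.put_attributes(v2, {'column': c2})   # one SimpleDB item per row of side data
sdb.close()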

This second option may perform better if you have very large amounts of data, since it can request each row as needed rather than fetching everything at once. Keep in mind that SimpleDB values can be at most 1024 characters long, so you may need to compress/decompress them somehow if your data values are longer than that.




Answer 2:


The actual answer to your question is that mrjob does not quite yet support the Hadoop Streaming join pattern, which is to read the map_input_file environment variable (which exposes the map.input.file property) to determine which type of file you are dealing with, based on its path and/or name.

You might still be able to pull it off if you can easily detect, just from reading the data itself, which type it belongs to, as demonstrated in this article:

http://allthingshadoop.com/2011/12/16/simple-hadoop-streaming-tutorial-using-joins-and-keys-with-python/
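As a rough sketch of that idea (not the article's or the author's exact code; the field layouts are invented), a mapper can tag each record by its shape and let the reducer do the join:

from mrjob.job import MRJob

class MRJoinByShape(MRJob):
    def mapper(self, _, line):
        fields = line.rstrip('\n').split('\t')
        # Hypothetical heuristic: vector records carry 2 fields, matrix records 3.
        if len(fields) == 2:
            yield fields[0], ('V', fields[1])
        else:
            yield fields[0], ('M', fields[1:])

    def reducer(self, key, tagged_values):
        vector_vals, matrix_vals = [], []
        for tag, value in tagged_values:
            (vector_vals if tag == 'V' else matrix_vals).append(value)
        # Join the two sides however your application requires:
        for v in vector_vals:
            for m in matrix_vals:
                yield key, (v, m)

if __name__ == '__main__':
    MRJoinByShape.run()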

However, that's not always possible...

Otherwise mrjob looks fantastic and I wish they could add support for this in the future. Until then, this is pretty much a deal breaker for me.




Answer 3:


This is how I use multiple inputs; based on the filename, I make suitable changes in the mapper phase.

Runner program:

import os
import datetime

from mrjob.hadoop import *


# Define all arguments

os.environ['HADOOP_HOME'] = '/opt/cloudera/parcels/CDH/lib/hadoop/'
print "HADOOP HOME is now set to : %s" % (str(os.environ.get('HADOOP_HOME')))
job_running_time = datetime.datetime.now().strftime('%Y-%m-%d_%H_%M_%S')
hadoop_bin = '/usr/bin/hadoop'
mode = 'hadoop'
hs = HadoopFilesystem([hadoop_bin])

input_file_names = ["hdfs:///app/input_file1/", "hdfs:///app/input_file2/"]
output_dir = "hdfs:///app/output/" + job_running_time  # hypothetical output location; adjust to your cluster

aargs = ['-r', mode, '--jobconf', 'mapred.job.name=JobName', '--jobconf', 'mapred.reduce.tasks=3', '--no-output', '--hadoop-bin', hadoop_bin]
aargs.extend(input_file_names)
aargs.extend(['-o', output_dir])
print aargs
status_file = True

mr_job = MR_Job(args=aargs)  # MR_Job is the job class defined below; import it if it lives in another module
with mr_job.make_runner() as runner:
    runner.run()
os.environ['HADOOP_HOME'] = ''
print "HADOOP HOME is now set to : %s" % (str(os.environ.get('HADOOP_HOME')))

The MRJob class:

from mrjob.job import MRJob
from mrjob.compat import get_jobconf_value  # renamed jobconf_from_env in newer mrjob releases


class MR_Job(MRJob):
    DEFAULT_OUTPUT_PROTOCOL = 'repr_value'

    def mapper(self, _, line):
        """
        This function reads lines from the input files.
        """
        try:
            # Need to clean email.
            # The second-to-last path component identifies the input directory,
            # so the mapper can branch per data set.
            input_file_name = get_jobconf_value('map.input.file').split('/')[-2]
            """
            Mapper code
            """
        except Exception, e:
            print e

    def reducer(self, email_id, visitor_id__date_time):
        try:
            """
            Reducer code
            """
        except:
            pass


if __name__ == '__main__':
    MR_Job.run()
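The "Mapper code" placeholder above is where the per-file branching happens. A self-contained sketch of just that branch (the class name is invented; the directory names match the runner's input paths):

from mrjob.job import MRJob
from mrjob.compat import get_jobconf_value  # renamed jobconf_from_env in newer mrjob releases

class MRMultiInputExample(MRJob):
    def mapper(self, _, line):
        # 'map.input.file' holds the full path of the split being processed;
        # its second-to-last component is the input directory name.
        input_dir = get_jobconf_value('map.input.file').split('/')[-2]
        if input_dir == 'input_file1':
            yield 'file1', line   # parse/emit records from the first data set
        elif input_dir == 'input_file2':
            yield 'file2', line   # parse/emit records from the second data set

if __name__ == '__main__':
    MRMultiInputExample.run()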



Answer 4:


In my understanding, you would not be using MrJob unless you wanted to leverage a Hadoop cluster or Hadoop services from Amazon, even if the example runs on local files.

MrJob essentially uses "Hadoop streaming" to submit the job.

This means that all inputs specified as files or folders in Hadoop are streamed to the mapper, and the subsequent results to the reducer. Each mapper obtains a slice of the input and considers all of it to be schematically the same, so it uniformly parses and processes a key/value pair from each record in its slice.

Deriving from this understanding, the inputs are schematically the same to the mapper. The only way to include two schematically different kinds of data is to interleave them in a single file in such a way that the mapper can tell which lines are vector data and which are matrix data.

You are actually doing that already.

You can improve on this by adding a specifier that marks whether a line is matrix data or vector data; once a vector line is seen, the preceding matrix data is applied to it, for example:

matrix, 1, 2, ...
matrix, 2, 4, ...
vector, 3, 4, ...
matrix, 1, 2, ...
.....
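A minimal sketch of a mapper/reducer pair that consumes this tagged layout (not the answer's code; the single 'all' key is a simplification that funnels everything through one reducer):

from mrjob.job import MRJob

class MRTaggedRecords(MRJob):
    def mapper(self, _, line):
        parts = [p.strip() for p in line.split(',')]
        tag, values = parts[0], [float(x) for x in parts[1:]]
        # Key every record identically so one reducer sees both record types;
        # a real job would pick a smarter key (e.g. a row or block index).
        yield 'all', (tag, values)

    def reducer(self, key, tagged_records):
        matrix_rows = []
        for tag, values in tagged_records:
            if tag == 'matrix':
                matrix_rows.append(values)
            elif tag == 'vector':
                # Apply the preceding matrix rows to this vector, as suggested above.
                for i, row in enumerate(matrix_rows):
                    yield i, sum(a * b for a, b in zip(row, values))
                matrix_rows = []

if __name__ == '__main__':
    MRTaggedRecords.run()

Keep in mind that Hadoop does not guarantee the order of values within a reduce group, which is part of why this layout is awkward, as the rest of the answer points out.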

But the process you have described works well; you have to have all the schematic data in a single file.

This still has issues, though. Key/value MapReduce works best when a complete schema is present in a single line that contains a complete, self-contained processing unit.

From my understanding, you are already doing it correctly, but I suspect MapReduce is not a suitable mechanism for this kind of data. I hope someone can clarify this even further than I could.



Source: https://stackoverflow.com/questions/9302580/multiple-inputs-with-mrjob
