This article demonstrates how to define a custom partition in Hadoop MapReduce. The material is concise and clearly organized, and it should help clear up any confusion; let's work through "how to customize partitioning in Hadoop MapReduce" together. The key idea: a Partitioner decides which reduce task receives each map output key, so overriding it gives you direct control over how records are distributed across reducers.
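For context, when no partitioner is set, Hadoop falls back to its stock HashPartitioner, which routes each record by the hash of its key. This balances load across reducers but gives you no say in which keys end up together. The snippet below is a reference sketch mirroring that default logic (the class name DefaultStylePartitioner is made up for illustration; it is not part of the job that follows):

import org.apache.hadoop.mapreduce.Partitioner;

// Sketch of the default hash-based routing: every key goes to the reducer
// selected by its hash code modulo the number of reduce tasks.
public class DefaultStylePartitioner<K, V> extends Partitioner<K, V> {
    @Override
    public int getPartition(K key, V value, int numReduceTasks) {
        // Mask off the sign bit so the modulo result is never negative.
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }
}

The custom partitioner in the full job below replaces exactly this step with name-based routing.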
package hello_hadoop;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Partitioner;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class AutoParitionner {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        if (args.length != 2) {
            System.err.println("Usage: hadoop jar xxx.jar <input path> <output path>");
            System.exit(1);
        }
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "avg of grades");
        job.setJarByClass(AutoParitionner.class);

        job.setMapperClass(PartitionInputClass.class);
        job.setReducerClass(PartitionOutputClass.class);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(DoubleWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(DoubleWritable.class);

        // Declare the custom partitioner class (defined below).
        job.setPartitionerClass(MyPartitioner.class);
        // Two reduce tasks, one per partition, so the output is split into two files.
        job.setNumReduceTasks(2);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

// Mapper: parses "name<TAB>grade" lines and emits (name, grade).
class PartitionInputClass extends Mapper<LongWritable, Text, Text, DoubleWritable> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        if (line.length() > 0) {
            String[] array = line.split("\t");
            if (array.length == 2) {
                String name = array[0];
                int grade = Integer.parseInt(array[1]);
                context.write(new Text(name), new DoubleWritable(grade));
            }
        }
    }
}

// Reducer: averages all grades seen for each name.
class PartitionOutputClass extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {
    @Override
    protected void reduce(Text text, Iterable<DoubleWritable> iterable, Context context)
            throws IOException, InterruptedException {
        // Accumulate in a double: DoubleWritable.get() returns a double, so an
        // int accumulator would not compile and would also truncate the average.
        double sum = 0;
        int cnt = 0;
        for (DoubleWritable iw : iterable) {
            sum += iw.get();
            cnt++;
        }
        context.write(text, new DoubleWritable(sum / cnt));
    }
}

// The custom partitioner. The type parameters <Text, DoubleWritable> are the
// map output key and value types, respectively.
class MyPartitioner extends Partitioner<Text, DoubleWritable> {
    @Override
    public int getPartition(Text text, DoubleWritable value, int numReduceTasks) {
        String name = text.toString();
        // Route four specific names to reducer 0; everything else goes to reducer 1.
        if (name.equals("wd") || name.equals("wzf") || name.equals("xzh") || name.equals("zz")) {
            return 0;
        } else {
            return 1;
        }
    }
}
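As a quick sanity check, here is a sketch of how the job behaves on sample data. Assume a tab-separated input file like the following (the names and scores are made up for illustration):

wd	90
wd	80
lisi	70
lisi	90

Running the job with two reduce tasks, for example hadoop jar xxx.jar /input /output (the jar name and paths are placeholders, matching the usage string above), should produce two output files: part-r-00000 holds the averages for wd, wzf, xzh, and zz, because MyPartitioner routes those keys to reducer 0, while part-r-00001 holds the averages for every other name, such as lisi.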
That is the whole of "how to customize partitioning in Hadoop MapReduce". Thanks for reading! You should now have a working understanding of the technique, and hopefully the material shared here proves useful. To learn more, follow the 编程网 featured channel!