CS 551 Systems Programming, Fall 2024
Programming Project 2
In this project we are going to simulate the MapReduce framework on a single machine using
multi-process programming.
1 Introduction
In 2004, Google introduced a general programming model for processing and generating
large data sets on a cluster of computers (described in the paper "MapReduce: Simplified Data
Processing on Large Clusters" by J. Dean and S. Ghemawat).
The general idea of the MapReduce model is to partition large data sets into multiple splits,
each of which is small enough to be processed on a single machine, called a worker. The data
splits will be processed in two phases: the map phase and the reduce phase. In the map phase, a
worker runs user-defined map functions to parse the input data (i.e., a split of data) into multiple
intermediate key/value pairs, which are saved into intermediate files. In the reduce phase, a
(reduce) worker runs reduce functions that are also provided by the user to merge the intermediate
files, and outputs the result to result file(s).
We now use a small data set (the first few lines of a famous poem by Robert Frost, see Figure
1) to explain what MapReduce does.
Figure 1: A small data set to be processed by MapReduce.
To run MapReduce, we first split the dataset into small pieces. For this example, we will split
the dataset into the four lines of the poem (Figure 2).
Figure 2: Partitioning the input data set into multiple splits.
The MapReduce framework will have four workers (in our project, the four workers are four
processes forked by the main program; in reality, they would be four independent machines)
to work on the four splits, each worker handling one split. These four map workers each
run a user-defined map function to process their split. The map function maps the input into
a series of (key, value) pairs. For this example, let the map function simply count the number of
occurrences of each letter (A-Z) in the data set.
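As a concrete illustration (the function name and signature below are placeholders, not the
interfaces the project requires), such a letter-counting map step could look like the following
C sketch:

#include <ctype.h>
#include <stdio.h>

/* Hypothetical map step: count the occurrences of A-Z in one split of text
 * and write one "letter count" pair per line.  Placeholder interface only. */
static void count_letters(const char *split, FILE *out)
{
    int counts[26] = {0};
    for (const char *p = split; *p != '\0'; p++) {
        if (isalpha((unsigned char)*p))
            counts[toupper((unsigned char)*p) - 'A']++;
    }
    for (int i = 0; i < 26; i++)
        fprintf(out, "%c %d\n", 'A' + i, counts[i]);
}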
Figure 3: The outputs of the map phase, which are also the inputs to the reduce phase.
The map outputs in our example are shown in Figure 3. They are also the inputs for the
reduce phase. In the reduce phase, a reduce worker runs a user-defined reduce function to merge
the intermediate results output by the map workers, and generates the final results (Figure 4).
Figure 4: The final result
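Correspondingly, the reduce step for this example only needs to sum the per-split counts.
Again, this is a placeholder sketch rather than the required interface:

#include <stdio.h>

/* Hypothetical reduce step: sum the "letter count" pairs read from several
 * intermediate files and write the totals.  Placeholder interface only. */
static void merge_counts(FILE *inputs[], int n, FILE *out)
{
    int totals[26] = {0};
    for (int i = 0; i < n; i++) {
        char letter;
        int count;
        while (fscanf(inputs[i], " %c %d", &letter, &count) == 2) {
            if (letter >= 'A' && letter <= 'Z')
                totals[letter - 'A'] += count;
        }
    }
    for (int i = 0; i < 26; i++)
        fprintf(out, "%c %d\n", 'A' + i, totals[i]);
}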
2 Simulating MapReduce with multi-process programming
2.1 The base code
Download the base code from Brightspace. You will need to add your implementation to
this base code. The base code also contains three input data sets as examples.
2.2 The working scenario
In this project, we will use the MapReduce model to process large text files. The input will be a
file that contains many lines of text. The base code folder contains three example input data files.
We will be testing with the example input data files, or with data files in a similar format.
A driver program is used to accept user inputs and drive the MapReduce processing. The
main part of the driver program is already implemented in main.c. You will need to complete the
mapreduce() function, which is defined in mapreduce.c and is called by the driver program.
A Makefile has already been provided; running it builds the executable of the driver
program, which is named "run-mapreduce". The driver program is used in the following way:
./run-mapreduce "counter"|"finder" file_path split_num [word_to_find]
where the arguments are explained as follows.
• The first argument specifies the type of the task; it can be either the "Letter counter" or
the "Word finder" task (explained later).
• The second argument, "file_path", is the path to the input data file.
• The third argument, "split_num", specifies how many splits the input data file should be
partitioned into for the map phase.
• The fourth argument is used only for the "Word finder" task. This argument specifies the
word that the user is trying to find in the input file.
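For example (the input file name here is only a placeholder; the base code ships its own example
data files), the two tasks might be invoked as:
./run-mapreduce "counter" ./example.txt 4
./run-mapreduce "finder" ./example.txt 4 road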
The mapreduce() function will first partition the input file into N roughly equal-sized splits,
where N is determined by the split_num argument of the driver program. Note that the splits
do not need to be exactly the same size; forcing them to be exactly equal could cut a word
across two different splits.
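One possible way to keep words intact (a sketch only; the helper name and the approach are
assumptions, not part of the provided interfaces) is to start from equal-sized offsets and push
each split boundary forward to the next whitespace character:

#include <ctype.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Hypothetical helper: given the input file descriptor, the file size and a
 * nominal split boundary, advance the boundary to the next whitespace so
 * that no word is cut across two splits. */
static off_t adjust_boundary(int fd, off_t file_size, off_t boundary)
{
    char c;
    while (boundary < file_size) {
        if (pread(fd, &c, 1, boundary) != 1)
            break;                        /* read error: keep current offset */
        if (isspace((unsigned char)c))
            break;                        /* boundary now sits on whitespace */
        boundary++;
    }
    return boundary;
}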
Then mapreduce() forks one worker process per data split, and each worker process runs
the user-defined map function on its data split. After all the splits have been processed, the
first worker process that was forked also needs to run the user-defined reduce function to
process all the intermediate files output by the map phase. Figure 5 below gives an example
of this process.
[Figure 5 shows the driver program partitioning the input data file into splits 0-3 and forking
map workers 0-3 (PIDs 1001-1004), one per split; each map worker runs the user-defined map
function and writes an intermediate file ("mr-0.itm" through "mr-3.itm"); the first forked worker
(PID 1001) then acts as the reduce worker, running the user-defined reduce function to produce
the result file "mr.rst".]
Figure 5: An example of the working scenario.
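For illustration only, the fork-per-split pattern might be sketched as below. Everything here is
an assumption rather than the required implementation: run_map_on_split() is a hypothetical
placeholder, and the requirement that the first forked worker also runs the reduce function after
all map work finishes is omitted. Note that only the parent process keeps forking and every
child exits after its own work, which avoids a fork bomb (see Section 2.4).

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* Sketch: fork one worker per split; each child runs the map step on its own
 * split, writes "mr-<i>.itm", then exits.  The parent waits for all workers. */
void run_workers(int split_num)
{
    for (int i = 0; i < split_num; i++) {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            exit(EXIT_FAILURE);
        }
        if (pid == 0) {                       /* child: never loops back     */
            char name[32];
            snprintf(name, sizeof(name), "mr-%d.itm", i);
            /* run_map_on_split(i, name); */  /* hypothetical map call       */
            exit(EXIT_SUCCESS);               /* child must exit here        */
        }
    }
    while (wait(NULL) > 0)                    /* parent reaps all workers    */
        ;
}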
2.3 The two tasks
The two tasks that can be performed by the driver program are described as follows.
The "Letter counter" task is similar to the example we showed in Section 1: counting the
number of occurrences of each of the 26 letters in the input file. The difference is that the
intermediate files and the final result file should be written in the following format:
A number-of-occurrences
B number-of-occurrences
...
Y number-of-occurrences
Z number-of-occurrences
The "Word finder" task is to find the word provided by the user (specified by the "word_to_find"
argument of the driver program) in the input file, and to output to the result file all the lines that
contain the target word, in the same order as they appear in the input file. For this task, you
should implement the word finder as a whole-word match, meaning that the function should only
recognize complete words that match the specified search term exactly (case-sensitive). If the
target word occurs more than once in the same line, you only need to output that line once.
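For illustration, a whole-word, case-sensitive match can be done by checking that the characters
on both sides of a candidate occurrence are not word characters. The sketch below is one
reasonable approach under assumptions of our own (including its definition of a "word
character" as alphanumeric or underscore), not a required implementation:

#include <ctype.h>
#include <stdbool.h>
#include <string.h>

/* Sketch: return true if `word` occurs in `line` as a whole word,
 * case-sensitively.  The notion of a "word character" is an assumption. */
static bool is_word_char(char c)
{
    return isalnum((unsigned char)c) || c == '_';
}

static bool line_has_word(const char *line, const char *word)
{
    size_t len = strlen(word);
    for (const char *p = strstr(line, word); p != NULL; p = strstr(p + 1, word)) {
        bool left_ok  = (p == line) || !is_word_char(p[-1]);
        bool right_ok = !is_word_char(p[len]);
        if (left_ok && right_ok)
            return true;
    }
    return false;
}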
2.4 Other requirements
• Besides the mapreduce() function defined in mapreduce.c, you will also need to complete the map/reduce functions of the two tasks (in usr_functions.c).
• About the interfaces listed in "user_functions.h" and "mapreduce.h":
– Do not change any function interfaces.
– Do not change or delete any fields in the structure interfaces (but you may add additional fields to a structure if necessary).
The above requirements allow the TA to test your implementation of the worker logic and the
user map/reduce functions separately. Note that violating these requirements will result in 0
points for this project.
• Use fork() to spawn processes.
• Be careful to avoid a fork bomb (check Wikipedia if you are not familiar with the term). A
fork bomb will result in 0 points for this project.
• The fd in the DATA_SPLIT structure should be a file descriptor for the original input data
file.
• The intermediate file output by the first map worker process should be named "mr-0.itm",
the intermediate file output by the second map worker process should be named "mr-1.itm",
and so on. The result file is named "mr.rst" (already done in main.c).
• Your program should not automatically delete the intermediate files once they are created;
they will be checked during grading. However, your submission should not contain any
intermediate files, as they should be created dynamically.
3 Submit your work
Compress the files: compress your README file, all the files in the base code folder, and
any additional files you add into a ZIP file. Name the ZIP file based on your BU email ID. For
example, if your BU email is "abc@binghamton.edu", then the ZIP file should be "proj2_abc.zip".
Submission: submit the ZIP file to Brightspace before the deadline.
3.1 Grading guidelines
(1) Prepare the ZIP file on a Linux machine. If your zip file cannot be uncompressed, 5 points
off.
(2) If the submitted ZIP file or the source code files included in the ZIP file are not named as
specified above (causing problems for the TA's automated grading scripts), 10 points off.
(3) If the submitted code does not compile:
1   the TA will try to fix the problem (for no more than 3 minutes);
2   if (problem solved)
3       1%-10% points off (based on how complex the fix is, at the TA's discretion);
4   else
5       the TA may contact the student by email or schedule a demo to fix the problem;
6       if (problem solved)
7           11%-20% points off (based on how complex the fix is, at the TA's discretion);
8       else
9           all points off;
So in the case that the TA contacts you to fix a problem, please respond to the TA's email promptly
or show up at the demo appointment on time; otherwise, line 9 above will take effect.
(4) If the code does not work as required in this spec, the TA will deduct points based on the
full points assigned to the task and the actual problem.
(5) Last but not least, stick to the collaboration policy stated in the syllabus: you may
discuss with your fellow students, but code must be kept absolutely private.
