Assignment 2: Parallelize What Seems
Inherently Sequential
Introduction

In parallel computing, there are operations that, at first glance, seem inherently sequential but can
be transformed and executed efficiently in parallel. One such operation is the "scan". At its
essence, the scan operation processes an array to produce a new array where each element is
the result of a binary associative operation applied to all preceding elements in the original array.
Consider an array of numbers, and envision producing a new array where each element is the
sum of all previous numbers in the original array. This type of scan that uses "+" as the binary
operator is commonly known as a "prefix-sum".  Scan has two primary variants: exclusive and
inclusive. In an exclusive scan, the result at each position excludes the current element, while in
an inclusive scan, it includes the current element. For instance, given an array [3, 1, 7, 0] and
an addition operation, an exclusive scan would produce [0, 3, 4, 11] , and an inclusive scan
would produce [3, 4, 11, 11] . 
Scan operations are foundational in parallel algorithms, with applications spanning from sorting to
stream compaction, building histograms and even more advanced tasks like constructing data
structures in parallel. In this assignment, we'll delve deep into the intricacies of scan, exploring its
efficient implementation using CUDA.

Assignment Description

In this assignment, you will implement a parallel scan using CUDA. Let's further assume that the
scan is inclusive and the operator involved in the scan is addition. In other words, you will be
implementing an inclusive prefix sum.
The following is a sequential version of inclusive prefix sum:

// Inclusive prefix sum: y[i] = x[0] + x[1] + ... + x[i]
void sequential_scan(int *x, int *y, unsigned int N) {
  y[0] = x[0];
  for (unsigned int i = 1; i < N; ++i) {
    y[i] = y[i - 1] + x[i];
  }
}

While this might seem like a task demanding sequential processing, with the right algorithm, it can
be efficiently parallelized. Your parallel implementation will be compared against the sequential
version which runs on the CPU. The mark will be based on the speedup achieved by your
implementation. Note that data transfer time is not included in this assignment. However, in
real-world applications, data transfer is often a bottleneck, and it is important to include it in the
speedup calculation.

Potential Algorithms

 In this section, I describe a few algorithms to implement a parallel scan on GPU, which you may
use for this assignment. Of course, you may also choose to use other algorithms. These
algorithms are chosen for their simplicity and may not be the fastest.
We will first present algorithms for performing parallel segmented scan, in which every thread
block will perform a scan on a segment of elements in the input array in parallel. We will then
present methods that combine the segmented scan results into the scan output for the entire input
array.

Segmented Scan Algorithms

The exploration of parallel solutions for scan problems has a long history, spanning several
decades. Interestingly, this research began even before the formal establishment of Computer
Science as a discipline. Scan circuits, crucial to the operation of high-speed adder hardware like
carry-skip adders, carry-select adders, and carry-lookahead adders, stand as evidence of this
pioneering research.
As we know, the fastest parallel method to compute the sum of a set of values is through a
reduction tree. Given enough execution units, this tree can compute the sum of N values in
log2(N) time units. Additionally, the tree can produce intermediate sums, which can be used to
produce the scan (prefix sum) output values. This principle is the foundation of the design of both
the Kogge-Stone and Brent-Kung adders.

Brent-Kung Algorithm

The above figure shows the steps of a parallel inclusive prefix sum algorithm based on the
Brent-Kung adder design. The top half of the figure produces the sum of all 16 values in 4 steps.
This is exactly how a reduction tree works. The second part of the algorithm (bottom half of the
figure) uses a reverse tree to distribute the partial sums and use them to complete the result at
those positions.
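To make this concrete, the sketch below shows one way a Brent-Kung segmented inclusive scan
could be written in CUDA. The kernel name, SECTION_SIZE, and the assumption that each block
processes exactly 2 * blockDim.x elements are illustrative choices, not part of the starter code.

#define SECTION_SIZE 2048  // assumed to be exactly 2 * blockDim.x (illustrative)

__global__ void brent_kung_scan(const int *x, int *y, unsigned int N) {
  __shared__ int XY[SECTION_SIZE];
  unsigned int i = 2 * blockIdx.x * blockDim.x + threadIdx.x;

  // Each thread loads two elements of its block's segment into shared memory.
  XY[threadIdx.x] = (i < N) ? x[i] : 0;
  XY[threadIdx.x + blockDim.x] = (i + blockDim.x < N) ? x[i + blockDim.x] : 0;

  // Reduction tree (top half of the figure): build partial sums in place.
  for (unsigned int stride = 1; stride <= blockDim.x; stride *= 2) {
    __syncthreads();
    unsigned int index = (threadIdx.x + 1) * 2 * stride - 1;
    if (index < SECTION_SIZE)
      XY[index] += XY[index - stride];
  }

  // Reverse tree (bottom half of the figure): distribute the partial sums.
  for (unsigned int stride = SECTION_SIZE / 4; stride > 0; stride /= 2) {
    __syncthreads();
    unsigned int index = (threadIdx.x + 1) * 2 * stride - 1;
    if (index + stride < SECTION_SIZE)
      XY[index + stride] += XY[index];
  }

  __syncthreads();
  if (i < N) y[i] = XY[threadIdx.x];
  if (i + blockDim.x < N) y[i + blockDim.x] = XY[threadIdx.x + blockDim.x];
}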

Kogge-Stone Algorithm

The Kogge-Stone algorithm is a well-known, minimum-depth network that uses a
recursive-doubling approach for aggregating partial reductions. The above figure shows an
in-place scan algorithm that operates on an array X that originally contains the input values. It
iteratively evolves the contents of the array into the output elements.
In the first iteration, each position other than X[0] receives the sum of its current content and that
of its left neighbor. This is illustrated by the first row of addition operators in the figure. As a result,
X[i] contains x_{i-1} + x_i. In the second iteration, each position other than X[0] and X[1] receives
the sum of its current content and that of the position two elements away (see the second row of
adders). After k iterations, X[i] contains the sum of up to 2^k input elements at and before that
location.
Although it has a work complexity of O(n log n), its shallow depth and simple shared-memory
address calculations make it a favorable approach for SIMD (SIMT) settings, like GPU warps.
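For reference, a minimal CUDA sketch of this idea is given below. The kernel name and
SECTION_SIZE are illustrative and assume one element per thread with the block size equal to the
segment size; the two __syncthreads() calls per iteration separate the reads of the previous
iteration's values from the writes, so iterations do not race.

#define SECTION_SIZE 1024  // assumed to equal blockDim.x (illustrative)

__global__ void kogge_stone_scan(const int *x, int *y, unsigned int N) {
  __shared__ int XY[SECTION_SIZE];
  unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;

  // One element per thread; out-of-range threads contribute the identity (0).
  XY[threadIdx.x] = (i < N) ? x[i] : 0;

  // After iteration k, XY[t] holds the sum of up to 2^k elements ending at t.
  for (unsigned int stride = 1; stride < blockDim.x; stride *= 2) {
    __syncthreads();
    int temp = 0;
    if (threadIdx.x >= stride)
      temp = XY[threadIdx.x - stride];  // read the previous iteration's value first
    __syncthreads();
    if (threadIdx.x >= stride)
      XY[threadIdx.x] += temp;          // then update in place
  }

  __syncthreads();
  if (i < N) y[i] = XY[threadIdx.x];
}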

Scan for Arbitrary-length Inputs

For many applications, the number of elements to be processed by a scan operation can be in the
millions or even billions. The algorithms that we have presented so far perform local scans on
input segments. Therefore, we still need a way to consolidate the results from different sections.

Hierarchical Scan

One such consolidation approach is the hierarchical scan. For a large dataset, we first partition
the input into sections so that each of them can fit into the shared memory of a streaming
multiprocessor (on the GPU) and be processed by a single block. The aforementioned algorithms
can be used to perform a scan on each partition. At the end of the grid execution, the Y array will
contain the scan results for the individual sections, called scan blocks (see the above figure). The
second step gathers the last result element from each scan block into an array S and performs a
scan on these elements. In the last step of the hierarchical scan algorithm, the intermediate
results in S are added to the corresponding elements in Y to form the final result of the scan.
For those who are familiar with computer arithmetic circuits, you may already recognize that the
principle behind the hierarchical scan algorithm is quite similar to that of carry look-ahead adders
in modern processor hardware.
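To make the three steps concrete, a minimal host-side driver could be organized as in the sketch
below. The kernel names (block_scan, scan_block_sums, add_block_sums), SECTION_SIZE, and
BLOCK_DIM are hypothetical placeholders for the segmented-scan kernels described above, and
the sketch assumes the per-block sums fit into a single scan block.

void hierarchical_scan(int *d_x, int *d_y, unsigned int N) {
  unsigned int num_blocks = (N + SECTION_SIZE - 1) / SECTION_SIZE;
  int *d_S;
  cudaMalloc(&d_S, num_blocks * sizeof(int));

  // Step 1: each block scans its own segment of x into y and writes the
  //         segment's total (its last element) into S.
  block_scan<<<num_blocks, BLOCK_DIM>>>(d_x, d_y, d_S, N);

  // Step 2: scan the per-block totals in S (assumed here to fit in one block).
  scan_block_sums<<<1, BLOCK_DIM>>>(d_S, num_blocks);

  // Step 3: add S[b - 1] to every element of scan block b to produce the final result.
  add_block_sums<<<num_blocks, BLOCK_DIM>>>(d_y, d_S, N);

  cudaDeviceSynchronize();
  cudaFree(d_S);
}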

Single Pass Scan

One issue with the hierarchical scan is that the partially scanned results are stored to global
memory after step 1 and reloaded from global memory before step 3. This memory access is not
overlapped with computation and can significantly affect the performance of the scan
implementation (as shown in the above figure).
Many techniques have been proposed to mitigate this issue. Single-pass chained scan (also
called stream-based scan or domino-style scan) passes the partial-sum data in one direction
across adjacent blocks. Chained scan is based on the key observation that the global scan step
(step 2 in the hierarchical scan) can be performed in a domino fashion (i.e., from left to right, with
each output used immediately). As a result, the global scan step does not require a global
synchronization after it, since each segment only needs the partial sums of the segments before
it.
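As a rough illustration of that hand-off, one designated thread per block might execute something
like the sketch below. The names (d_counter, d_flags, d_partials) are hypothetical zero-initialized
global arrays, not part of the starter code. Real single-pass scans also obtain a dynamic segment
index (as done here with atomicAdd) so a segment never waits on a block that has not been
scheduled yet, and they broadcast the returned prefix to the rest of the block through shared
memory.

__device__ int d_counter;  // hands out segment indices in scheduling order

__device__ int chain_handoff(volatile int *d_flags, volatile int *d_partials,
                             int local_sum) {
  // Take the next segment index so dependencies point only to already-running blocks.
  int seg = atomicAdd(&d_counter, 1);

  int prefix = 0;
  if (seg > 0) {
    // Spin until the previous segment has published its running total.
    while (atomicAdd((int *)&d_flags[seg - 1], 0) == 0) { }
    prefix = d_partials[seg - 1];
  }

  // Publish this segment's running total, then raise its flag.
  d_partials[seg] = prefix + local_sum;
  __threadfence();                      // make the sum visible before the flag
  atomicExch((int *)&d_flags[seg], 1);

  return prefix;                        // exclusive prefix to add to this segment
}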

Further Reading

Parallel Prefix Sum (Scan) with CUDA


Single-pass Parallel Prefix Scan with Decoupled Look-back


Report

Along with your code, you will also need to submit a report. Your report should describe the
following aspects in detail:
Describe which algorithm you chose and why.
Describe any design decisions you made and why. Explain how they might affect performance.
Describe anything you tried (even if it is not in the final implementation) and whether it worked.
Why or why not.
Analyze the bottleneck of your current implementation and the potential optimizations.
Use font Times New Roman, size 10, single spaced. The length of the report should not exceed 3
pages.

Setup

Initial Setup

Start by unzipping the provided starter code a2.zip into a protected directory within your UG
home directory. There are multiple files in the provided zip file; the only file you will need to
modify and hand in is implementation.cu. You are not allowed to modify the other files, as only
your implementation.cu file will be tested for marking.
Within implementation.cu, you need to insert your identification information in the
print_team_info() function. This information is used for marking, so do it right away before you
start the assignment.

Compilation

The assignment uses GNU Make to compile the source code. Run make in the assignment
directory to compile the project, and the executable named ece17**a2 should appear in the same
directory.

Coding Rules

The coding rules are very simple.
You must not use any existing GPU parallel programming library such as Thrust or CUB.
You may implement any algorithm you want.
Your implementation must use CUDA C++ and be compilable with the provided Makefile.
You must not interfere with or attempt to alter the time measurement mechanism.
Your implementation must be properly synchronized so that all operations are finished before
your implementation returns.

Evaluation

The assignment will be evaluated on a UG machine equipped with an Nvidia GPU. Therefore,
make sure to test your implementation on the UG machines before submission. When you
evaluate your implementation using the command below, you should receive similar output.

ece17**a2 -g
************************************************************************************
Submission Information:
nick_name: default-name
student_first_name: john
student_last_name: doe
student_student_number: 0000000000
************************************************************************************
Performance Results:
Time consumed by the sequential implementation: 124374us
Time consumed by your implementation: 1250**us
Optimization Speedup Ratio (nearest integer): 1
************************************************************************************

Marking Scheme

The total available marks for the assignment are divided as follows: 20% for the lab report, 65%
for the non-competitive portion, and 15% for the competitive portion. The non-competitive section
is designed to allow individuals who put in minimal effort to pass the course, while the competitive
section aims to reward those who demonstrate higher merit.

Non-competitive Portion (65%)

Achieving full marks in the non-competitive portion should be straightforward for anyone who puts
in the minimal acceptable amount of effort. You will be awarded full marks in this section if your
implementation achieves a threshold speedup of 30x. Based on submissions during the
assignment, the TA reserves the right to adjust this threshold as deemed appropriate, providing at
least one week's notice.

Competitive Portion (15%)

Marks in this section will be determined based on the speedup of your implementation relative to
the best and worst speedups in the class. The formula for this is:

mark = (your speedup - worst speedup over threshold) / (top speedup - worst speedup over threshold)

Throughout the assignment, updates on competitive marks will be posted on Piazza at intervals
not exceeding 24 hours.
The speedup will be measured on a standard UG machine equipped with a GPU. (Therefore,
make sure to test your implementation on the UG machines.) The final marking will be performed
after the submission deadline on all valid submissions.

Submission

Submit your report on Quercus. Make sure your report is in PDF format and can be viewed with a
standard PDF viewer (e.g. xpdf or acroread).

When you have completed the lab, you will hand in just implementation.cu, which contains your
solution. The standard procedure to submit your assignment is by typing submitece17**f 2
implementation.cu on one of the UG machines.
Make sure you have included your identifying information in the print_team_info() function.
Remove any extraneous print statements.
