\documentclass[12pt]{article}
\usepackage{geometry}
\usepackage{titling}
\usepackage{helvet}
\usepackage{tgpagella} % text only


\geometry{
  a4paper,
  lmargin=1in,
  rmargin=1in,
  tmargin=1in,
  bmargin=1in,
}
\setlength{\droptitle}{-3em}   % adjust to pull the title block upward




\title{\fontfamily{phv}\selectfont
Analyzing Performance of Booth’s Algorithm and Modified Booth’s Algorithm}
\author{Brett Weiland}

\begin{document}
\maketitle
\begin{abstract}
In this paper, the performance of Booth’s algorithm is compared against that of the modified Booth’s algorithm. Each multiplier is simulated in Python, and performance is measured by counting the number of add and subtract operations each performs across a range of inputs. The results are analyzed and discussed to highlight the tradeoffs between the two multipliers.
\end{abstract}
\section*{Introduction}
Multiplication is among the most time-consuming arithmetic operations for processors, and in many applications the time spent multiplying dominates a program's running time. Digital signal processing tasks such as audio modification and image processing depend on constant multiply–accumulate operations for functions like the fast Fourier transform and convolution. Other applications, including machine learning, 3D graphics, and data analysis, depend heavily on multiplying large matrices. In such workloads, the speed of multiplication is vital, so most modern processors implement multiplication in hardware. However, not all multiplication circuits are equal: designs trade performance against hardware complexity. To complicate matters further, a given multiplication circuit performs differently depending on the operands it is given.
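This operand dependence is the behavior the simulations in this paper measure. As an illustrative sketch only (not the simulator used to produce the reported results; the function name and register layout are placeholders), radix-2 Booth's algorithm can be modeled in Python with a counter on its add/subtract steps:

```python
def booth_multiply(m, q, bits=8):
    """Radix-2 Booth multiplication of two signed `bits`-bit integers.

    Returns (product, ops), where ops counts the add/subtract
    operations performed -- the cost metric used in this paper.
    """
    mask = (1 << bits) - 1
    A = 0             # accumulator register
    Q = q & mask      # multiplier register (two's-complement pattern)
    Q_1 = 0           # extra bit to the right of Q
    M = m & mask      # multiplicand (two's-complement pattern)
    ops = 0
    for _ in range(bits):
        pair = (Q & 1, Q_1)
        if pair == (0, 1):          # 01: add the multiplicand
            A = (A + M) & mask
            ops += 1
        elif pair == (1, 0):        # 10: subtract the multiplicand
            A = (A - M) & mask
            ops += 1
        # arithmetic shift right of the combined (A, Q, Q_1) register
        Q_1 = Q & 1
        Q = ((Q >> 1) | ((A & 1) << (bits - 1))) & mask
        sign = A >> (bits - 1)
        A = ((A >> 1) | (sign << (bits - 1))) & mask
    result = (A << bits) | Q
    if result >> (2 * bits - 1):    # sign-extend the 2*bits-wide product
        result -= 1 << (2 * bits)
    return result, ops
```

Note how the operation count depends on the multiplier's bit pattern: a run of consecutive 1s costs only one subtract and one add, while alternating bits force an operation at nearly every step.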
\section*{Implementation}
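For comparison, a radix-4 (modified) Booth multiplier examines overlapping three-bit groups of the multiplier, recoding each group to a multiple in $\{-2,-1,0,1,2\}$ of the multiplicand, so only $n/2$ partial-product steps are needed. The following minimal sketch (again a placeholder illustration, not this project's actual simulator) counts only the nonzero add/subtract steps:

```python
def modified_booth_multiply(m, q, bits=8):
    """Radix-4 (modified) Booth multiplication of two signed integers.

    Scans overlapping 3-bit groups of the multiplier, so at most
    bits/2 add/subtract operations are performed. Returns (product, ops).
    """
    # recoding table: (b[i+1], b[i], b[i-1]) -> multiple of the multiplicand
    recode = {
        (0, 0, 0): 0,  (0, 0, 1): 1,  (0, 1, 0): 1,  (0, 1, 1): 2,
        (1, 0, 0): -2, (1, 0, 1): -1, (1, 1, 0): -1, (1, 1, 1): 0,
    }
    Q = q & ((1 << bits) - 1)   # multiplier's two's-complement pattern
    product = 0
    ops = 0
    prev = 0                    # implicit bit to the right of bit 0
    for i in range(0, bits, 2):
        group = ((Q >> (i + 1)) & 1, (Q >> i) & 1, prev)
        mult = recode[group]
        if mult != 0:           # zero groups cost no add/subtract
            product += (mult * m) << i
            ops += 1
        prev = (Q >> (i + 1)) & 1
    return product, ops
```

Because the recoding of a two's-complement bit pattern sums to the signed value of the multiplier, no separate sign correction is needed; strings of identical bits recode to zero groups and cost nothing.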

\section*{Results}
\section*{Analysis}
\section*{Conclusion}
\end{document}