Friday, October 10, 2008

My Report


SPARC (from Scalable Processor Architecture)
is a RISC microprocessor instruction set architecture originally designed in 1985 by Sun Microsystems.
SPARC is a registered trademark of SPARC International, Inc., an organization established in 1989 to promote the SPARC architecture and to provide conformance testing. SPARC International was intended to open up the SPARC architecture and create a larger ecosystem around the design, and the architecture has since been licensed to several manufacturers, including Texas Instruments, Atmel, Cypress Semiconductor, and Fujitsu. As a result, the SPARC architecture is fully open and non-proprietary.
Implementations of the SPARC architecture were initially designed and used for Sun's Sun-4 workstation and server systems, replacing their earlier Sun-3 systems based on the Motorola 68000 family of processors. Later, SPARC processors were used in SMP servers produced by Sun Microsystems, Solbourne and Fujitsu, among others.
The SPARC architecture was heavily influenced by the earlier RISC designs including the RISC I & II from the University of California, Berkeley and the IBM 801. These original RISC designs were minimalist, including as few features or op-codes as possible and aiming to execute instructions at a rate of almost one instruction per clock cycle. This made them similar to the MIPS architecture in many ways, including the lack of instructions such as multiply or divide. Another feature of SPARC influenced by this early RISC movement is the branch delay slot.
A SPARC processor may contain as many as 128 general-purpose registers. At any point only 32 of them are immediately visible to software: 8 are global registers (one of which, %g0, is hard-wired to zero, so only 7 of them are usable as ordinary registers) and the other 24 come from the register stack. These 24 registers form what is called a register window, and at function call and return this window is moved up and down the register stack. Each window has 8 local registers and shares 8 registers with each of the adjacent windows. A rough C model of this overlap is sketched below.
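One way to picture the scheme is as a circular file of windowed registers indexed by a current window pointer (CWP). The following sketch is only an illustration of the overlap described above; the number of windows, the direction the pointer moves, and the rule that %g0 always reads as zero are simplified or ignored here, so it is not the exact SPARC definition.

#define NWINDOWS   8                   /* assumed number of windows       */
#define NWINDOWED  (NWINDOWS * 16)     /* each window adds 16 registers   */

static int globals[8];                 /* %g0-%g7, shared by all windows  */
static int windowed[NWINDOWED];        /* circular file of windowed regs  */

/* Map a software-visible register number (0-31) in window cwp to a slot
   in the physical register file.  The outs (%o0-%o7) of one window use
   the same slots as the ins (%i0-%i7) of the next window, which is the
   8-register overlap between adjacent windows. */
static int *reg(int cwp, int r)
{
    if (r < 8)   return &globals[r];                                      /* %g */
    if (r < 16)  return &windowed[((cwp + 1) * 16 + r - 8) % NWINDOWED];  /* %o */
    if (r < 24)  return &windowed[(cwp * 16 + 8 + r - 16) % NWINDOWED];   /* %l */
    return &windowed[(cwp * 16 + r - 24) % NWINDOWED];                    /* %i */
}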
INVENTORS
Sgro, Joseph A.
Stanton, Paul C.
HISTORY
There have been three major revisions of the architecture. The first published revision was the 32-bit SPARC Version 7 (V7) in 1986. SPARC Version 8 (V8), an enhanced SPARC architecture definition, was released in 1990. SPARC V8 was standardized as IEEE 1754-1994, an IEEE standard for a 32-bit microprocessor architecture. SPARC Version 9, the 64-bit SPARC architecture, was released by SPARC International in 1993. In early 2006, Sun released an extended architecture specification, UltraSPARC Architecture 2005. UltraSPARC Architecture 2005 includes not only the nonprivileged and most of the privileged portions of SPARC V9, but also all the architectural extensions (such as CMT, hyperprivileged, VIS 1, and VIS 2) present in Sun's UltraSPARC processors starting with the UltraSPARC T1 implementation. UltraSPARC Architecture 2005 includes Sun's standard extensions and remains compliant with the full SPARC V9 Level 1 specification. The architecture has provided continuous application binary compatibility from the first SPARC V7 implementation in 1987 into the Sun UltraSPARC Architecture implementations.
REFERENCES
1. Various SPARC V7 implementations were produced by Fujitsu, LSI Logic, Weitek, Texas Instruments and Cypress. A SPARC V7 processor generally consisted of several discrete chips, usually comprising an Integer Unit (IU), a Floating-Point Unit (FPU), a Memory Management Unit (MMU) and cache memory.
2. "FX1 Key Features & Specifications". Fujitsu, 2008-02-19.
3. "A Third-Generation 65nm 16-Core 32-Thread Plus 32-Scout-Thread CMT SPARC(R) Processor". Sun Microsystems, 2008-02-19.
4. "Intergraph Announces Port of Windows NT to SPARC Architecture". The Florida SunFlash, 1993-07-07.

Thursday, October 9, 2008

Final Question

DIGITAL CLOCK

#include <stdio.h>      /* printf                        */
#include <conio.h>      /* gotoxy, kbhit                 */
#include <dos.h>        /* struct time, gettime, sleep   */
#include <math.h>       /* sin, cos                      */
#include <graphics.h>   /* Turbo C BGI graphics routines */

#define PI 3.14

/* Read the system time, show a digital read-out, and return h/m/s. */
void getTime(int *h, int *m, int *s)
{
    struct time t;
    gettime(&t);
    gotoxy(36, 18);
    printf("%2d:%02d:%02d.%02d\n", t.ti_hour, t.ti_min, t.ti_sec, t.ti_hund);
    *h = t.ti_hour;
    *m = t.ti_min;
    *s = t.ti_sec;
}

void main()
{
    int gd = DETECT, gm;
    int xs, ys, xm, ym, xh, yh, h, m, s;

    initgraph(&gd, &gm, "\\tc");    /* path to the BGI driver files */

    while (!kbhit())                /* redraw once a second until a key is hit */
    {
        cleardevice();
        getTime(&h, &m, &s);

        /* dial labels and captions */
        settextstyle(1, 0, 0);
        setcolor(WHITE);
        outtextxy(300, 15, "12");
        outtextxy(315, 425, "6");
        outtextxy(105, 220, "9");
        outtextxy(520, 220, "3");
        settextstyle(5, 0, 0);
        setcolor(GREEN);
        outtextxy(275, 300, "CLOCK");
        settextstyle(2, 0, 0);
        setcolor(LIGHTRED);
        outtextxy(310, 295, "Mukesh");

        /* convert the time to hand end-points around the centre (320,240) */
        xh = cos((h * 30 + m / 2) * PI / 180 - PI / 2) * 150 + 320;
        yh = sin((h * 30 + m / 2) * PI / 180 - PI / 2) * 150 + 240;
        xm = cos(m * PI / 30 - PI / 2) * 180 + 320;
        ym = sin(m * PI / 30 - PI / 2) * 180 + 240;
        xs = cos(s * PI / 30 - PI / 2) * 210 + 320;
        ys = sin(s * PI / 30 - PI / 2) * 210 + 240;

        /* draw the dial and the hour, minute and second hands */
        setcolor(LIGHTBLUE);
        circle(320, 240, 220);
        setcolor(LIGHTRED);
        line(320, 240, xh, yh);
        setcolor(LIGHTGREEN);
        line(320, 240, xm, ym);
        setcolor(YELLOW);
        line(320, 240, xs, ys);

        sleep(1);
    }
    closegraph();
}

Tuesday, October 7, 2008

Question # 5

Research on the net the most recent assembler. Describe its history, nature and applications. Evaluate this assembler against its predecessors.

U-Boot is a boot loader for embedded boards based on PowerPC, ARM, MIPS and several other processors. It can be installed in a boot ROM and used to initialize and test the hardware or to download and run application code.

The development of U-Boot is closely related to Linux: some parts of the source code originate in the Linux source tree, the two projects have some header files in common, and special provision has been made to support booting of Linux images. Some attention has been paid to making this software easily configurable and extendable. For instance, all monitor commands are implemented with the same call interface, so it is very easy to add new commands.

Also, instead of permanently adding rarely used code (for instance hardware test utilities) to the monitor, you can load and run it dynamically.
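As a rough illustration of that uniform call interface, the sketch below shows how a new monitor command is typically registered through U-Boot's U_BOOT_CMD macro. The command name and body here are made up, and the exact handler signature and macro arguments vary between U-Boot versions, so treat this as a sketch rather than a drop-in example.

#include <common.h>    /* U-Boot's main internal header           */
#include <command.h>   /* cmd_tbl_t and the U_BOOT_CMD macro      */

/* Hypothetical command: print its arguments back to the console. */
static int do_echoargs(cmd_tbl_t *cmdtp, int flag, int argc, char *argv[])
{
    int i;

    for (i = 1; i < argc; i++)
        printf("arg %d: %s\n", i, argv[i]);

    return 0;
}

/* Register the command with the monitor: name, maximum number of
   arguments, repeatable flag, handler, short usage text, long help. */
U_BOOT_CMD(
    echoargs, 4, 1, do_echoargs,
    "echoargs - print command arguments",
    "arg1 [arg2 [arg3]]\n"
);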

REFERENCE: u-boot.sourceforge.net

Wednesday, October 1, 2008

Question #4

Justify in what situations or applications programmers would rather use assembly language than higher-level programming languages, and vice versa.

In the old days, it was pretty easy to understand that writing your programs in assembly would tend to yield higher-performing results than writing in higher-level languages. Compilers had solved the problem of "optimization" in too general and simplistic a way, and had no hope of competing with the sly human assembly coder. These days the story is slightly different. Compilers have gotten better and the CPUs have gotten harder to optimize for. Inside some research organizations the general consensus is that compilers can do at least as well as humans in almost all cases.

During a presentation I gave to some folks at AT&T Bell Labs (a former employer) I explained that I was going to implement a certain piece of software in assembly language, which raised eyebrows. One person went so far as to stop me and suggest a good C/C++ compiler that would do a very good job of generating optimized object code and make my life a lot easier. But have compilers really gotten so good that humans cannot compete? I offer the following facts.

High-level languages like C and C++ treat the host CPU in a very generic manner. While local optimizations such as loop unrolling and register resource contention are easy for compilers to deal with, odd features like 32-byte cache lines, 8 KB data/code caches, multiple execution units, and burst device memory interfaces are not easily expressed or exploited by a C/C++ compiler. On a Pentium, it is ordinarily beneficial to declare your data so that its usage in inner loops retains as much as possible in the cache for as long as possible. This can require bizarre declarations, most easily dealt with by using unions of 8K structures for all data used in your inner loops. This way you can overlap data with poor cache coherency together, while using as much of the remainder of the cache as possible for data with good cache coherency. The Pentium also has an auxiliary floating-point execution unit which can actually perform floating-point operations concurrently with integer computations. This can lead to algorithmic designs which require an odd arrangement of your code that has no sensible correspondence with high-level code computing the same thing. Basically, on the Pentium, C/C++ compilers have no easy way to translate source code into cache-structure-aware data and code along with concurrently running floating point. The MMX generation of x86 processors will pose even greater problems. Nevertheless, I explained to the folks at Bell Labs that I owned the compiler they suggested, and that when it came to optimizations, I could (and can) easily code circles around it.

The classic example of overlooking the points above is that of one magnanimous programmer who came from a large company and declared to the world, through USENET, that he was going to write a "100% optimal 3D graphics library completely in C++". He emphatically defended his approach with long, flaming postings insisting that modern C++ compilers would be able to duplicate any hand-rolled assembly-language optimization trick. He got most of the way through before abandoning his project. He eventually realized that the only viable solution for existing PC platforms is to exploit the potential for pipelining the FPU and the integer CPU instructions in a maximally concurrent way, something no x86-based C compiler in production today is capable of doing.
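To make the cache point a little more concrete, here is a minimal C sketch of the simpler half of the idea: grouping everything an inner loop touches into one structure whose total size is checked against the Pentium's 8 KB data cache. The structure and field names are hypothetical, and the full "union of 8K structures" trick described above goes further than this.

/* Hypothetical hot data for an inner loop, grouped into a single block
   so its total footprint is known at compile time. */
struct hot_data {
    float coeff[1024];   /* 4 KB of coefficients      */
    short index[1024];   /* 2 KB of lookup indices    */
    char  flags[1024];   /* 1 KB of per-element flags */
};

/* Compile-time check (old C trick): the typedef'd array gets a negative
   size, and the build fails, if the working set outgrows 8 KB. */
typedef char hot_data_fits_cache[(sizeof(struct hot_data) <= 8 * 1024) ? 1 : -1];

static struct hot_data hot;   /* the one block the inner loop works on */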
I always roll my eyes when the old debate about whether or not the days of hand-rolled assembly language are over resurfaces in the USENET assembly language newsgroups. On the other hand, perhaps I should be thankful, since these false beliefs about the abilities of C/C++ compilers held by other programmers only serve, by comparison, to differentiate my abilities more clearly to my employer. The conclusion you should take away from this (and my other related web pages) is that when pedal-to-the-metal performance is required, there is a significant margin to be gained by using assembly language. Ordinarily, one combines C/C++ and assembly by using the compiler's inline assembly feature, or by linking to a library of hand-rolled assembly routines.
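As one small example of the "combine C with a little assembly" approach just mentioned, here is a minimal sketch that uses GCC-style inline assembly to read the Pentium's time-stamp counter with RDTSC, an instruction a portable C compiler will not emit by itself. The function name is mine, and the inline-assembly syntax differs for other compilers (Turbo C, Visual C++), so this is only a sketch.

#include <stdio.h>

/* Read the 64-bit time-stamp counter with the RDTSC instruction.
   GCC inline-assembly syntax; works on x86 CPUs from the Pentium on. */
static unsigned long long read_tsc(void)
{
    unsigned int lo, hi;
    __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
    return ((unsigned long long)hi << 32) | lo;
}

int main(void)
{
    unsigned long long start, end;

    start = read_tsc();
    /* ... code being timed would go here ... */
    end = read_tsc();

    printf("elapsed cycles: %llu\n", end - start);
    return 0;
}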

Question # 3

Question: Research in the net what is the best assembler and why?

The Netwide Assembler (NASM) is an assembler and disassembler for the Intel x86 architecture. It can be used to write 16-bit, 32-bit (IA-32) and 64-bit (x86-64) programs. NASM is considered to be one of the most popular assemblers for Linux and is the second most popular assembler overall. NASM was originally written by Simon Tatham with assistance from Julian Hall, and is currently maintained by a small team led by H. Peter Anvin. It was originally copyrighted, but it is now available as free software under the terms of the GNU Lesser General Public License.

NASM can output several binary formats, including COFF, Portable Executable, a.out, ELF and Mach-O, though position-independent code is only supported for ELF object files. NASM also has its own binary format called RDOFF. 32-bit programs can be written using NASM in such a way that they are portable to all 32-bit x86 operating systems, provided the right libraries are used. The variety of output formats allows one to retarget programs to virtually any x86 operating system. In addition, NASM can create flat binary files, usable in writing boot loaders, ROM images, and various facets of OS development. NASM can run on non-x86 platforms, such as SPARC and PowerPC, though it cannot output programs usable by those machines. NASM uses Intel assembly syntax instead of AT&T syntax. It also avoids features such as automatic generation of segment overrides (and the related ASSUME directive) used by MASM and compatible assemblers.

References


Ram Narayan. "Linux assemblers: A comparison of GAS and NASM": "two of the most popular assemblers for Linux®, GNU Assembler (GAS) and Netwide Assembler (NASM)".
Randall Hyde. "Which Assembler is the Best?". Retrieved 2008-05-18: "In second place, undoubtedly, is the NASM assembler."
"The Netwide Assembler". Retrieved 2008-06-27.
Randall Hyde. "NASM: The Netwide Assembler". Retrieved 2008-06-27.
"NASM Manual". Retrieved 2008-06-27.

Sunday, September 28, 2008

Question # 2

Research on the net the usual applications written in assembly language. Describe these applications briefly and discuss the efficiency and effectiveness of these applications.
Launch Java Applications from Assembly Language Programs

Java Native Interface (JNI) is a mechanism that can be used to establish communication between native language programs and the Java virtual machine. The documentation for JNI and the technical literature on JNI deal extensively with interactions between the JVM and C/C++ code. The Java SDK even provides a utility to generate a header file to facilitate calling C/C++ programs from Java code. However, there is hardly any mention of Java and assembly language code working together. In an earlier article I showed how assembly language programs can be called from Java applications. Here I deal with the technique for invoking Java programs from an ASM process through a demo application that calls a Java method from assembly language code. The Java method brings up a Swing JDialog to show that it has, indeed, been launched.
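The article's demo does this from assembly language, but the same JNI invocation-API calls are easier to show in C. The sketch below creates a JVM inside a native process and calls a static method of a hypothetical class Demo; the class name, method name and classpath are assumptions for illustration only.

#include <jni.h>
#include <stdio.h>

int main(void)
{
    JavaVM *jvm;
    JNIEnv *env;
    JavaVMInitArgs vm_args;
    JavaVMOption options[1];
    jclass cls;
    jmethodID mid;

    /* Tell the JVM where to find Demo.class (the path is an assumption). */
    options[0].optionString = "-Djava.class.path=.";
    vm_args.version = JNI_VERSION_1_4;
    vm_args.nOptions = 1;
    vm_args.options = options;
    vm_args.ignoreUnrecognized = JNI_FALSE;

    /* Create the JVM inside this native process. */
    if (JNI_CreateJavaVM(&jvm, (void **)&env, &vm_args) != JNI_OK) {
        fprintf(stderr, "could not create JVM\n");
        return 1;
    }

    /* Look up the hypothetical class Demo and its static void show()
       method, then call it, e.g. to pop up the Swing dialog mentioned above. */
    cls = (*env)->FindClass(env, "Demo");
    if (cls != NULL) {
        mid = (*env)->GetStaticMethodID(env, cls, "show", "()V");
        if (mid != NULL)
            (*env)->CallStaticVoidMethod(env, cls, mid);
    }

    (*jvm)->DestroyJavaVM(jvm);
    return 0;
}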
Why Java with ASM?
JNI is essential to the implementation of Java, since the JVM needs to interact with the native platform to implement some of its functionality. Apart from that, however, use of Java classes can often be an attractive supplement to applications written in other languages, as Java offers a wide selection of APIs that makes implementation of advanced functions very simple.

Some time ago, I was associated with an application to collect real-time data from a number of sources and save them in circular buffers so that new data would overwrite old data once the buffer got filled up. If a designated trigger event was sensed through a digital input, a fixed number of data samples would be saved in the buffers so that a snapshot of pre- and post-trigger data would be available. The original application was written in assembly language. After the application was used for a few months, it was felt that it would be very useful to have the application mail the snapshots to authorized supervisors whenever the trigger event occurred. Of course, it would have been possible to write this extension in assembly, but the team felt that in that particular instance it was easier to write that extension in Java and hook it up with the ASM program. As I had earlier worked with ASM-oriented JNI, I knew this could be done and, indeed, the project was implemented quickly and successfully.

I am sure there are many legacy applications written in assembly language that could benefit from such add-ons. However, it is not only for old applications in need of renovation that JNI can prove useful. Although it may seem unlikely to some of us, assembly language is still used for writing selected portions of new programs.