GRAPH
1. Create a Column Chart showing the amount due by each of the Customers
2. Move the Graph to a new worksheet
a. Call this new worksheet Total Due
b. Color the tab BLUE
3. Enter your name and Student ID # in the Header of the worksheet
4. Complete the Page Properties
a. Include your name
b. Tags (use a minimum of 3, separated by commas)
Pros and Cons of Open Source Software
Despite the importance of open source, OSS is not without its share of challenges. Let us take an
objective look at the pros and cons of open source software to decide if it is right for you:
Pros of open source software
The benefits of open source software are:
1. Lower cost
Open source projects are typically developed and maintained by a community of volunteers who share
their skills and knowledge for free. This means anyone can access and use the software without paying
hefty licensing fees or upfront costs.
Moreover, since the source code is available to everyone, businesses can modify and customize the
software according to their specific needs, eliminating the need for expensive proprietary software as a
service (SaaS) solutions.
2. Ease of integration
Unlike closed source software, which often requires proprietary application programming interfaces
(APIs) and protocols, open source software promotes interoperability and compatibility with other
systems. This means developers can seamlessly integrate different components or modules from various
sources without facing major hurdles.
3. No vendor lock-in
With OSS, there is no vendor lock-in. This means users are not tied to a single provider or limited by
proprietary formats or protocols. Developers are free to modify and customize the software according
to their specific needs without any restrictions imposed by a single vendor.
Additionally, it ensures that users can easily switch between different vendors if they find one better
suited to their requirements or if they encounter any issues with the current provider. This flexibility
ultimately leads to cost savings and improved efficiency in the long run.
4. Greater flexibility
Developers have access to the underlying source code, giving them the freedom to make any necessary
changes or enhancements. This flexibility empowers organizations and individuals to tailor the software
to meet their unique requirements, whether adding new features or integrating with other systems. The
ability to customize open source software makes it a versatile solution for applications across many
industries.
This inherent flexibility in open source software ensures that businesses can build agile IT infrastructures
that evolve alongside their changing needs—a valuable advantage that closed source alternatives may
not offer.
5. The power of community
Unlike closed source software, where development and improvement are limited to a specific team or
company, open source projects benefit from a global community of developers, users, and enthusiasts.
The power of community lies in its ability to harness the collective intelligence of passionate individuals
who share a common goal: creating high-quality software accessible to all.
Cons of open source software
Open source software, despite its many advantages, also has some drawbacks that users should be
aware of:
1. No prompt support
Open source software often relies on community forums and user-driven discussions for
troubleshooting and problem-solving. While these forums can be helpful in many cases, they may not
always provide immediate or timely responses to users’ queries or issues. This can be frustrating for
individuals or businesses requiring urgent technical assistance.
It’s important to consider this aspect before choosing open source software, especially if your
organization requires a high level of technical support and quick resolution for any potential problems
that may arise during usage.
2. Complexity and difficult learning curve
Unlike closed source software, which often comes with user-friendly interfaces and comprehensive
documentation, open source software might require some technical expertise to navigate effectively.
The sheer flexibility and customization options offered by open source software can lead to higher
complexity. Users may need to invest time in understanding the intricacies of the software’s framework
and architecture, configuration settings, and dependencies. This can be daunting for those unfamiliar
with coding or with limited technical knowledge.
3. Limited features
Open source software (OSS) offers numerous benefits, but it’s important to acknowledge that there can
also be limitations. One of these is the potential for limited features in certain OSS projects. This is
because OSS development relies on community contributions and volunteer efforts, which may result in
prioritizing core functionalities over additional features.
4. High maintenance needs
Since these projects rely on community contributions and updates, they require regular upkeep to
ensure compatibility and security. This includes regularly monitoring for updates, bug fixes, and security
patches. Open source applications can become vulnerable to security breaches or compatibility issues
with other software components without proper maintenance.
Additionally, because multiple contributors are involved in the development of open source projects,
coordinating updates can be more complex compared to closed source alternatives, where a single
vendor oversees all aspects of maintenance.
5. Less stable builds
The open nature of OSS means that anyone can modify and update the code, which can lead to
unintended consequences. As a result, users may experience occasional crashes or glitches while using
open source applications. However, passionate developers within the community work diligently to
address these issues and release patches and updates to improve stability over time.
Open Source Software Examples
There are numerous examples of open source software that have made a significant impact in the tech
industry. These include:
VLC Media Player: VLC Media Player, developed by VideoLAN, is a highly popular open source media
player that supports a wide range of audio and video formats. With its humble beginnings in 1996 as an
academic project at École Centrale Paris, VLC has become one of today’s most widely used media
players.
GIMP: GIMP, short for GNU Image Manipulation Program, is a powerful open source image editing
software. The history of GIMP dates back to 1995 when Spencer Kimball and Peter Mattis developed the
software as a class project at the University of California, Berkeley. Over the years, it has evolved into a
highly versatile tool used by graphic designers, photographers, and artists worldwide.
Mozilla Firefox: Mozilla Firefox is a popular web browser that has gained immense popularity among
users worldwide. It was created by the Mozilla Foundation and was initially released in 2004 as an open
source project.
Linux: Linux is a powerful open source operating system that has revolutionized the computing world.
Developed by Linus Torvalds in 1991, Linux was initially created as an alternative to proprietary
operating systems like Windows and macOS. It is modeled on the Unix operating system and is designed
to run on a wide range of devices, from servers to the Raspberry Pi.
Apache HTTP Server: The Apache HTTP Server, commonly known as Apache, was created by a group of
developers in 1995 to provide a robust and reliable platform for hosting websites. Since its inception, it
has become one of the most popular web servers globally, at its peak serving more than half of all websites.
An Introduction to Linux
Introduction
Linux is an extremely versatile Unix-like operating system, and it has taken a clear
lead in the High Performance Computing (HPC) and scientific computing community. Linux is a multi-
user, preemptive multitasking operating system that provides a number of facilities including
management of hardware resources, directories, and file systems, as well as the loading and execution
of programs. A vast number of utilities and libraries have been developed (mostly free and open source
as well) to accompany or extend Linux.
There are two major components of Linux, the kernel and the shell:
1. The kernel is the core of the Linux operating system that schedules processes and interfaces directly
with the hardware. It manages system and user I/O, processes, devices, files, and memory.
2. The shell is a text-only interface to the kernel. Users input commands through the shell, and the kernel
receives the tasks from the shell and performs them. The shell tends to do four jobs repeatedly: display
a prompt, read a command, process the given command, and execute it, after which it starts the cycle
all over again.
It is important to note that users of a Linux system typically do not interact with the kernel directly.
Rather, most user interaction is done through the shell or a desktop environment.
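The shell's repeated prompt-read-process-execute cycle can be sketched as a toy loop in bash. This is purely illustrative (a real shell also performs expansion, job control, and much more); the function name toy_shell is an arbitrary choice:

```shell
#!/usr/bin/env bash
# A toy read-eval loop illustrating the shell's four repeated jobs:
# display a prompt, read a command, process it, and execute it.
toy_shell() {
    while true; do
        printf 'toy$ ' >&2           # 1. display a prompt (on stderr, like bash)
        read -r line || break        # 2. read a command; stop at end-of-input
        [ -z "$line" ] && continue   # 3. process: ignore empty lines
        eval "$line"                 # 4. execute the command, then start over
    done
}

# Feed two commands to the toy shell non-interactively:
printf 'echo hello\necho world\n' | toy_shell
```

Piping commands into the function shows the same loop a user drives interactively, with end-of-input playing the role of logging out.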
Unix
The Unix operating system got its start in 1969 at Bell Laboratories and was written in assembly
language. In 1973, Ken Thompson and Dennis Ritchie succeeded in rewriting Unix in their new language
C. This was quite an audacious move; at the time, system programming was done in assembly in order to
extract maximum performance from the hardware. The concept of a portable operating system was
barely a gleam in anyone's eye.
The creation of a portable operating system was very significant in the computing industry, but then
came the problem of licensing each type of Unix. Richard Stallman, an American software freedom
activist and programmer, recognized a need for open source solutions and launched the GNU project in
1983, later founding the Free Software Foundation. His goal was to create a completely free and open
source operating system that was Unix-compatible or Unix-like.
Linux
In 1987, the source code to a minimalistic Unix-like operating system called MINIX was released by
Andrew Tanenbaum, a professor at Vrije Universiteit, for academic purposes. Inspired by MINIX, Linus
Torvalds began developing a new operating system while a student at the University of Helsinki in 1991.
In September of 1991, Torvalds released the first version (0.01) of the Linux kernel.
Torvalds greatly enhanced the open source community by releasing the Linux kernel under the GNU
General Public License so that everyone has access to the source code and can freely make
modifications to it. Many components from the GNU project, such as the GNU Core Utilities, were then
integrated with the Linux kernel, thus completing the first free and open source operating system.
Linux has been adapted to a variety of computer systems of many sizes and purposes. Furthermore,
different variants of Linux (called Linux distributions) have been developed over time to meet various
needs. There are now hundreds of different Linux distributions available, with a wide variety of features.
The most popular operating system in the world is actually Android, which is built on the Linux kernel.
Why Linux
Linux has been so heavily utilized in the HPC and scientific computing community that it has become the
standard in many areas of academic and scientific research, particularly those requiring HPC. There have
been over 40 years of development in Unix and Linux, with many academic, scientific, and system tools.
In fact, as of November 2017, all of the TOP500 supercomputers in the world run Linux!
Linux has four essential properties which make it an excellent operating system for the science
community:
Performance – Performance of the operating system can be optimized for specific tasks such as running
small portable devices or large supercomputers.
Functionality – A number of community-driven scientific applications and libraries have been developed
under Linux in areas such as molecular dynamics, linear algebra, and fast Fourier transforms.
Flexibility – The system is flexible enough to allow users to build applications with a wide array of
support tools such as compilers, scientific libraries, debuggers, and network monitors.
Portability – The operating system, utilities, and libraries have been ported to a wide variety of devices
including desktops, clusters, supercomputers, mainframes, embedded systems, and smart phones.
Files and Processes
Everything in Linux is considered to be either a file or a process:
A process is an executing program identified by a unique process identifier, called a PID. Processes may
be short in duration, such as a process that prints a file to the screen, or they may run indefinitely, such
as a monitor program.
A file is a collection of data, with a location in the file system called a path. Paths will typically be a series
of words (directory names) separated by forward slashes, /. Files are generally created by users via text
editors, compilers, or other means.
A directory is a special type of file. Linux uses a directory to hold information about other files. You can
think of a directory as a container that holds other files or directories; it is equivalent to a folder in
Windows or macOS.
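These ideas map directly onto everyday commands. A small illustration (the /tmp/demo path below is just an example location):

```shell
# Processes: every running program has a unique process identifier (PID).
echo "This shell's PID is $$"            # $$ expands to the current shell's PID

# Files and directories: a path is a series of names separated by /.
mkdir -p /tmp/demo/project               # a directory is a special kind of file
echo "some data" > /tmp/demo/project/notes.txt   # create an ordinary file
ls /tmp/demo/project                     # list the directory's contents
```

Running `ls` here is itself a short-lived process that reads the directory file and prints its entries.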
A file is typically stored on physical storage media such as a disk (hard drive, flash disk, etc.). Every file
must have a name because the operating system identifies files by their name. File names may contain
any characters, although some special characters (such as spaces, quotes, and parentheses) can make it
difficult to access the file, so you should avoid them in filenames. On most common Linux variants, file
names can be as long as 255 characters, so it is convenient to use descriptive names.
Files can hold any sequence of bytes; it is up to the user to choose the appropriate application to
correctly interpret the file contents. Files can be human readable text organized line by line, a structured
sequence only readable by a specific application, or a machine-readable byte sequence. Many programs
interpret the contents of a file as having some special structure, such as a pdf or postscript file. In
scientific computing, binary files are often used for efficient storage and data access. Some other
examples include scientific data formats like NetCDF or HDF which have specific formats and provide
application programming interfaces (APIs) for reading and writing.
The Linux kernel is responsible for organizing processes and interacting with files; it allocates time and
memory to each process and handles the file system and communications in response to system calls.
The Linux system uses files to represent everything in the system: devices, internals to the kernel,
configurations, etc.
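You can see this file-based representation directly on a running system. The /proc paths below assume Linux's procfs is mounted, which is standard on Linux but absent on other Unix-like systems:

```shell
# Devices are represented as files under /dev:
ls -l /dev/null

# Kernel internals appear as read-only files under /proc (Linux-specific):
cat /proc/version 2>/dev/null || echo "procfs not available on this system"
```

Reading /proc/version does not open a file on disk at all; the kernel generates the contents on demand, which is exactly the "everything is a file" idea at work.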
Shells
A variety of different shells are available for Linux and Unix, each with pros and cons. While bash
(an enhanced replacement for the original Bourne shell, sh) and tcsh (descended from the C shell, csh)
are the most common shells, the choice
of shell is entirely up to user preference and availability on the system. In most Linux distributions, bash
is the default shell.
The purpose of a shell is to interpret commands for the Operating System (OS) to execute. Since bash
and other shells are scripting languages, a shell can also be used for programming via scripts. The shell is
an interactive and customizable environment for the user.
All examples in this tutorial use the bash shell.
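Because bash is a scripting language as well as an interactive shell, the commands you type can be collected into a program. A minimal sketch (the variable names and the default value "world" are arbitrary):

```shell
#!/usr/bin/env bash
# Variables, command substitution, and a loop: the basics of bash scripting.
name="${1:-world}"            # first script argument, defaulting to "world"
today=$(date +%Y-%m-%d)       # capture a command's output in a variable

echo "Hello, $name! Today is $today."

for shell in bash tcsh zsh; do     # iterate over a list of words
    echo "Available choice: $shell"
done
```

Saved to a file, marked executable with `chmod +x`, and run, this behaves like any other command, which is how most system administration and HPC job scripts are built.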
Text Editors
A text editor is a tool to assist the user with creating and editing files. There is no "best" text editor; it
depends on personal preferences. Regardless of your typical workflow, you will likely need to be
proficient in using at least one common text editor if you are using Linux for scientific computing or
similar work. Two of the most widely used command-line editors are Vim and Emacs, which are
available via the vim and emacs commands, respectively, in systems where they are installed.
Linux File System
A file system is a structured method of storing and managing data—including files, directories, and
metadata—on your machine. Think of it like a library. If thousands of books were scattered around,
finding one would be hard. But in an organized structure, like labeled shelves, locating a book becomes
easy.
This article aims to simplify the complexities of Linux file systems, guiding beginners through their layers,
characteristics, and implementations. By shedding light on these nuances, we empower users to make
informed choices in navigating the dynamic landscape of Linux operating systems.
What is the Linux File System
A Linux file system is a set of processes that controls how, where, and when data is stored on or
retrieved from storage devices. It manages data systematically on disk drives or partitions, and each
partition in Linux has its own file system. Whereas Windows assigns drive letters such as C: and D:,
Linux uses mount points, so every file system appears somewhere under the single root directory, /.
In Linux, everything is treated as a file, including devices and applications.
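The mount-point model can be inspected with standard tools (the exact devices and sizes shown will differ from machine to machine):

```shell
# Which device and file system type back the root directory:
df -h /

# List every mounted file system and where it is attached:
mount
```

Note that there is no drive letter anywhere in the output; each file system is simply grafted onto a directory somewhere under /.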
Linux File System Structure
The architecture of a file system comprises the three layers described below.
1. Logical File System:
The Logical File System acts as the interface between the user applications and the file system itself. It
facilitates essential operations such as opening, reading, and closing files. Essentially, it serves as the
user-friendly front-end, ensuring that applications can interact with the file system in a way that aligns
with user expectations.
2. Virtual File System:
The Virtual File System (VFS) is a crucial layer that enables the concurrent operation of multiple
instances of physical file systems. It provides a standardized interface, allowing different file systems to
coexist and operate simultaneously. This layer abstracts the underlying complexities, ensuring
compatibility and cohesion between various file system implementations.
3. Physical File System:
The Physical File System is responsible for the tangible management and storage of physical memory
blocks on the disk. It handles the low-level details of storing and retrieving data, interacting directly with
the hardware components. This layer ensures the efficient allocation and utilization of physical storage
resources, contributing to the overall performance and reliability of the file system.
Together, these layers form a cohesive architecture, orchestrating the organized and efficient handling
of data in the Linux operating system.
Figure: Architecture of a file system
Characteristics of a File System
A file system defines the rules and structures for how data is organized, stored, accessed, and managed
on a storage device.
Space Management: How data is laid out on the storage device, including the allocation of storage
blocks and the handling of fragmentation.
Filename: A file system may impose certain restrictions on file names, such as name length, the use of
special characters, and case sensitivity.
Directory: The directories/folders may store files in a linear or hierarchical manner while maintaining an
index table of all the files contained in that directory or subdirectory.
Metadata: For each file stored, the file system keeps information about that file, such as its size, access
permissions, device type, and modification date and time. This information is called metadata.
Utilities: File systems provide features for initializing, deleting, renaming, moving, copying, and backing
up files and folders, as well as recovery and access control.
Design: Each file system's design imposes limits on the amount of data it can store, such as maximum
file and volume sizes.
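Several of these characteristics are visible from the command line. A sketch using GNU coreutils (the `stat -c` format flag is GNU-specific, and the /tmp file name is just an example):

```shell
f=/tmp/fs-demo.txt
echo "hello" > "$f"              # create a file (6 bytes including the newline)

# Metadata: size, permissions, and timestamps are kept by the file system.
stat -c 'size=%s bytes  perms=%A  modified=%y' "$f"

# Utilities: copy, rename (move), and delete are file system operations.
cp "$f" "$f.bak"
mv "$f.bak" "$f.copy"
rm "$f" "$f.copy"
```

Everything `stat` prints comes from the file's metadata, not from its contents, which is why it works even on files you cannot read.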
Linux File Systems:
Here are some common Linux file systems:
Figure: Types of file systems in Linux
1) ext (Extended File System):
Implemented in 1992, ext was the first file system designed specifically for Linux and the first member
of the ext family of file systems.
2) ext2:
The second ext was developed in 1993. It is a non-journaling file system, and because the absence of a
journal means fewer writes, it became a popular choice for flash drives and SSDs. It fixed a limitation of
the original ext by keeping separate timestamps for file access, inode modification, and data
modification. However, without a journal, it requires a lengthy file system check at boot after an
improper shutdown.
3) Xiafs:
Also released in 1993, Xiafs was created as an alternative to ext but lacked the power and functionality
of ext2. Due to its limited features and scalability, it is no longer in use.
4) ext3:
Introduced in 1999, ext3 brought in journaling capabilities, offering improved reliability. Unlike ext2, it
avoided long boot-time checks after an improper shutdown. It also supported online file system growth
and HTree indexing, making it efficient for large directories.
5) JFS (Journaled File System):
First created by IBM in 1990, the original JFS was open-sourced and ported to Linux in 1999. It is known
for performing well under varied loads, but it is not commonly used anymore since the release of ext4 in
2006, which gives better performance.
6) ReiserFS:
It is a journaling file system developed in 2001. It introduced tail packing, a scheme that reduces
internal fragmentation by packing small files and file tails together. It uses B+ trees, which give
sublinear time for directory lookups and updates. It was the default file system in SUSE Linux from
version 6.4 until 2006, when version 10.2 switched to ext3.
7) XFS:
XFS is a 64-bit journaling file system and was ported to Linux in 2001. It now acts as the default file
system for many Linux distributions. It provides features like snapshots, online defragmentation, sparse
files, variable block sizes, and excellent capacity. It also excels at parallel I/O operations.
8) SquashFS:
Developed in 2002, this compressed, read-only file system is used mainly in embedded systems and
live media, where low overhead is needed.