Honours Project Investigation Report
An Open-Source Adobe Lightroom-like Application
Pierre-Loic Chevillot
2011
A report submitted as part of the requirements for the degree of
BSc (Hons) in Computer Science
at
The Robert Gordon University, Aberdeen, Scotland
Declaration
I confirm that the work contained in this Honours project report has been
composed solely by myself and has not been accepted in any previous application
for a degree. All sources of information have been specifically acknowledged and
all verbatim extracts are distinguished by quotation marks.
Signed ………………………………………………………
Date………………………………
The declaration must be signed and dated by the student when each volume of the report is
submitted. The work will not be accepted for assessment unless the above declaration has
been included and signed.
2. Background
2.1. Image Processing
Digital technologies have developed massively over the last few years, and falling hardware costs have allowed them to be included in many widely used devices such as mobile phones, laptops and cameras. One technology seeing massive deployment on mobile platforms is the digital photography device. Improved sensors can now take reasonably good, detailed pictures on mobile phones, and the low cost of the hardware has pushed manufacturers to integrate cameras into most phones on the market. Imaging technology now reaches a large user population, and this creates a need for low-cost or free applications to manipulate images: to enhance image quality, to share an image easily on a social network, or to integrate images into other digital media.
The primary focus of this project is image processing, and the enhancement of images acquired by classical digital cameras. However, image processing can be applied to a wide range of fields:
o Robotics: one of the aims of computer science and engineering is to build a robot capable of navigating its environment without any human intervention. The robot and its embedded computer must extract and process images (video camera sequences, sensor data, etc.) in order to move in any environment. Image processing is mainly used to extract information (edge detection, contrast enhancement, noise removal) so that computer vision can build a 3D view of the scene. One up-to-date example of this technology is the Mars rovers, which operate autonomously on Mars and perform many tasks without any human help.
o Medical diagnosis: the biomedical sector now uses many imaging devices, such as X-ray, MRI and CT scanners, to detect diseases without being intrusive for the patient. These devices give the doctor an image that must be analyzed in order to give the right treatment to the patient. Image processing provides tools to improve the definition of the images produced by medical devices, helping the doctor with the diagnosis: the more powerful the image-processing algorithms are at improving image definition, the more accurate the diagnosis can be. This is now a big research sector for image processing.
2.2. Aims and Objectives of the Project
The main aim of this project is to provide a feature-rich, open-source application that can be executed standalone on any user's computer to process images captured by a digital camera. The project will focus on features commonly used in processing software. The application implements efficient algorithms for adjusting:
Brightness and contrast
Color balancing
Color saturation
The application will provide the user with rich information about the image being processed, such as the intensity of the colors contained in the image and the various pieces of information stored by the digital camera in the image file.
Images will be read from disk in industry-standard formats, processed, and the results saved back to disk in industry-standard formats.
The project will also address usability by implementing rich user interface features, allowing the user to see the immediate effect of each adjustment operation on the image.
2.3. Comparison of Programming Languages
The application must run on a local machine, for any user. The best way to create such an application is to use an object-oriented programming (OOP) language. There are 3 well-known object-oriented programming languages to consider: C#, C++ and Java. Each of them has strengths and weaknesses.
C++: C++ was created in the 1980s to improve the C language (a procedural programming language). C++ is an evolved version of C which introduces the concepts of object-oriented programming (programming with classes), inheritance and templates. It offers the possibility to work with C libraries and with the normalized C++ library, such as "std", the standard library standardized in 1998. It is very powerful because no intermediate layer is needed to put something in memory or to use the processor. The language is well suited to image processing because it does not require any extra layers to hand the data over to the processor, and some powerful image-processing libraries exist. There is also a huge library for graphical user interfaces, named Qt, developed originally by Trolltech and now maintained by Nokia. This library offers the possibility to create rich user interfaces, with many options that are free for developers to use. However, C++ is hard to use because of its age; even though the language is continually upgraded, the last official ISO standard was published in 2003. C++ makes heavy use of pointers, that is, direct access to an object at its memory address, and developing image-processing code can be much harder because it involves many pointers to library functions and objects; an error is easily made that could corrupt the whole application.
Java: Java is the most widely used OOP language for developing applications in companies and educational institutions around the world. It was created in 1995 by Sun Microsystems (Java is now owned by Oracle, which bought Sun Microsystems in 2009). Java removed some of the subtler features of C++, such as multiple inheritance and explicit pointers and references, and was designed to be fully object-oriented. Java uses a virtual machine to execute its code. One big advantage of Java is that it is cross-platform: an application can be launched from any operating system (OS) such as Linux, Windows or Mac, independently of the computer and the OS, thanks to the JVM (Java Virtual Machine), a layer added on top of the operating system to run Java applications. Many projects use Java for image processing; probably the best known is ImageJ. The drawback is that the Java Virtual Machine consumes a lot of resources on the computer, on top of what the application itself consumes. Since the application must do its job as quickly as possible, the virtual machine is a brake on performance.
C#: C# is the language created by Microsoft in 2001. It was designed to be integrated into the .NET (dot net) platform, and at the same time to provide a fully independent object-oriented programming language. The language is a mix between Java and C/C++: it provides the easy syntax of Java and re-implements some useful features of C++. It is based on the .NET Framework (which makes it possible to create applications with several Microsoft languages, such as Visual Basic and Visual C++, and merge them into a larger application without any deployment problem). The Framework works with the CLR (Common Language Runtime), which is roughly the equivalent of the Java JVM, but is integrated into recent versions of Microsoft Windows. C# has some portability to platforms other than Microsoft's through the Mono project, which tries to provide the .NET framework for Linux/Unix platforms under a GNU licence.
The point is to choose a language that is both powerful and easy to write for this project. From the comparison, C++ and C# seem the most powerful and fastest languages to work with because, unlike Java, they do not require an additional virtual machine to be launched that takes up the computer's resources (the CLR being integrated into Windows). Java offers the easiest syntax, but C# is a good compromise, not least thanks to its IDE (Integrated Development Environment), Microsoft Visual Studio.
C# therefore seems the best compromise between working directly with the computer's resources and having an easy development and writing tool.
2.4. C# (C-Sharp) and its Environment
C# was created in 2001 by Microsoft to integrate a powerful object-oriented programming language into its .NET platform. For Microsoft, C# had to be a simple and modern object-oriented programming language.
2.4.1. .NET architecture
.NET (dot net) is based on an architecture layered on top of Windows. This layer consists of a collection of DLLs (Dynamic Link Libraries), which can be incorporated into a project, or are directly included in recent Windows kernels and just need to be called.
This Microsoft Windows layer is composed of thousands of classes because, as explained in the previous section, C# is fully object-oriented. The classes (included in DLLs) can be used in the same way by any Microsoft language to create an application. The classes provided run under the .NET execution environment, called the runtime. The .NET runtime is in effect the equivalent of the Java virtual machine, but integrated into the Microsoft environment. The .NET runtime provides services such as:
Loading and managed execution of applications.
Isolation of applications from one another.
Translation of bytecode to native code while the application is running, known as Just-In-Time (JIT) compilation.
Checking of memory accesses: no access is possible outside the area allocated to the application, and likewise for arrays allocated by the application.
Memory management with a garbage collector.
Automatic adaptation to national characteristics (language, numeric and symbol representation, keyboard transcription, etc.).
Compatibility with COM (Component Object Model) modules, which are not themselves managed by the .NET framework.
Silverlight is a client-side web application technology. Silverlight deploys the application on the client computer, and C# is used to interact with the client. The application communicates with the server but without using the server's resources; it only requests the information the client needs while using the application. Silverlight uses XAML and WPF as well to provide events, buttons, etc.
C# is a very young language, but it already has several versions. Each version adds a valuable evolution to the language and to the .NET architecture. Versions 1 and 2 mostly integrated new functionality and developed C# within the .NET architecture. Since version 3, in 2007, Silverlight and Rich Internet Applications came into the language. In version 4, released in 2010, the most important innovation is parallel processing on multi-core computers.
2.4.3. Visual Studio
Visual Studio is an IDE (Integrated Development Environment) created by Microsoft for its .NET Framework.
Visual Studio lets the developer create console applications, rich graphical user interface applications, web sites, services and web applications. The IDE gives the developer the choice of language among the .NET-compatible languages.
Like any classical IDE, Visual Studio includes a debugger. This debugger is not focused on one language, but was developed to debug cross-language applications built on the .NET architecture. For this project, only the C# side of the debugger is needed.
One of the good tools of Visual Studio is "IntelliSense", Microsoft's auto-completion tool, which also shows a short description of types in a pop-up window. While the developer is typing, IntelliSense can suggest statements to help.
Another tool gives a better organization of the source code: named regions. Regions of code can be named inside #region name ... #endregion; the enclosed functions are then grouped into a sub-list that can be expanded or collapsed as needed.
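As a small illustration of the region mechanism just described (the class and member names here are invented for the example, not taken from the project):

```csharp
public class AdjustmentPanel
{
    #region Brightness members
    // Everything between #region and #endregion can be collapsed
    // or expanded in the Visual Studio editor margin.
    public int Brightness { get; set; }
    #endregion

    #region Contrast members
    public int Contrast { get; set; }
    #endregion
}
```

The directives have no effect on the compiled code; they only structure the editor view.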
3. Analysis
3.1. Image Processing
3.1.1. Theory
Image processing is the technology of applying algorithms to images, for different purposes. It builds on imaging technology, which translates the optical view of a scene into a digital image. This digital image can then be processed by image-processing algorithms.
For a human, an image seen by the eyes is a multitude of points of light. In the retina, rods and cones are the two major light-sensitive photoreceptor cells. These two cell types give humans their sensitivity to light and colors. Rods are used mostly for low-light vision: they are responsible for the monochrome vision we experience in a room with no light, where we can still navigate because we sense the obstacles. Cones are color-sensitive; each type of cone is sensitive to a range of wavelengths of the spectral representation of light. (Figure 1)
Figure 1
The white curves show the sensitivity of the cones, and the black one the sensitivity of the rods.
The computer representation is made of pixels. A pixel is the representation of a point of light. It is encoded on 32 bits, but only 24 bits are used for the color representation: 8 bits for each of the red, green and blue components. The last 8 bits can either be unused, or used to represent the transparency of the image; this is called the alpha channel.
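The 32-bit layout described above can be sketched in code. This is a generic illustration of the encoding (assuming the ARGB byte order used by System.Drawing's Format32bppArgb), not code from the project:

```csharp
class PixelDemo
{
    // Pack 8-bit alpha, red, green and blue components into one 32-bit pixel.
    public static uint Pack(byte a, byte r, byte g, byte b)
        => ((uint)a << 24) | ((uint)r << 16) | ((uint)g << 8) | b;

    // Recover the four components from a packed pixel.
    public static (byte A, byte R, byte G, byte B) Unpack(uint pixel)
        => ((byte)(pixel >> 24), (byte)(pixel >> 16),
            (byte)(pixel >> 8), (byte)pixel);
}
```

For example, Pack(255, 255, 0, 0) gives 0xFFFF0000, an opaque pure red.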
3.1.2. Color space definitions
RGB:
The RGB (Red, Green, Blue) color space is easy to understand because all possible colors can be made from the three primary colors. This color-space model has become the main model for computer graphics. Two color spaces that inherit from the basic RGB model are now mostly used in monitors (LCD, phones, etc.):
Adobe RGB: designed in 1998. This color space is designed to reproduce, on a computer display, the colors that can be achieved on printers.
sRGB: developed in 1996 by a collaboration between Microsoft and HP for use on monitors, printers and the Internet. It is now the default color space for many devices such as printers, monitors, phones and video cameras.
The colors we perceive are not simply a mix of the three components, but rather a combination of light intensity and coloration. The coloration comprises the hue and the saturation. The hue is the color perceived, such as blue, cyan or yellow, and the saturation is the purity of that hue: it can be gray for a very low saturation, and maximal for a pure color. That is why, if we increase the brightness of a scene, it produces a proportional increase of the light reflected by the objects of the scene at every wavelength; in this example the RGB components are multiplied by a constant, but the saturation and hue do not change.
The luminance is the intensity of light received. It is normalized by the CIE (International Commission on Illumination) as a weighted combination of the red, green and blue lights:
Y = 0.2125 R + 0.7154 G + 0.0721 B
The sum of the 3 coefficients is equal to 1, but the coefficients differ from one another. This explains the difference in perceived luminance between the components: a green light will seem brighter than a red one, and a red one brighter than a blue one.
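The formula can be checked with a few sample values; a small helper (an illustration, not project code):

```csharp
class LuminanceDemo
{
    // Relative luminance from the weights quoted above,
    // for components in the 0..1 range.
    public static double Luminance(double r, double g, double b)
        => 0.2125 * r + 0.7154 * g + 0.0721 * b;
}
```

Pure green (0, 1, 0) gives 0.7154, against 0.2125 for pure red and 0.0721 for pure blue, matching the perceived brightness ordering; white (1, 1, 1) gives exactly 1.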
But in the encoding of a color, a complementary piece of data has to be added to the luminance. This complementary data is the chrominance. The chrominance is a linear combination of 2 numbers
4. System Design
4.1. Existing Proprietary Products
Nowadays, a lot of different applications have been created to give any user of a digital device tools to edit images. These applications provide lots of different tools, and some of them are aimed at professional photographers or graphic editors.
4.1.1. Adobe Photoshop
Photoshop is a huge image-processing application, and maybe one of the best known. The software includes a large library of image algorithms. It is aimed at professionals, but most of its users are non-professional.
Photoshop has basic image-processing tools such as an easy cropping tool, color modification, and filters to apply. These basic tools are really easy to use thanks to the good graphical interface provided by Adobe. But most of the power of Photoshop is found in the menu tool bar, which groups a wide variety of algorithms to create, modify or add effects to images.
Images can be created from scratch. Photoshop works with a masking tool: a mask is a layer on which graphic objects are drawn, and masks overlay one another. Effects can be applied to one mask or a set of masks, and different light effects can be applied to the overlaid masks.
Adobe Photoshop is one of the most powerful applications, but it is mostly intended for professionals who work on graphics or photography montage.
4.1.2. GIMP
GIMP is the open-source counterpart of Photoshop. Its development team tries to provide the same functionality as the Adobe product. GIMP is a good solution for graphics and image processing on operating systems such as Linux, where Adobe products cannot be used.
GIMP provides a wide range of good image-processing tools, like Adobe Photoshop, but it is quite difficult to use. On the other hand, the software is free to use and constantly updated by its development team.
4.1.3. Adobe Lightroom
Adobe Lightroom is image-processing software created for professional photographers, designed to simplify the post-processing of their work.
Lightroom is designed to manage a library of images, adjust images, and export them to a printer or publish them in a website gallery. The software's import tool gives the possibility to select which images to import before copying them onto the computer.
Some modification tools will be provided, such as:
Brightness modification
Contrast adjustment
Saturation level adjustment
Color filter adjustment
4.3. Design
4.3.1. Lifetime of the project
The project is run on a prototyping-and-evaluation plan. The plan is to create a first quick prototype with some basic functionality, then evaluate this prototype and debug it. When the evaluation is reasonably good, a new prototype is created with more functionality, and the evaluation process restarts. This prototype/evaluation cycle can be repeated until a prototype is evolved enough to be released as the final project.
During the lifetime of this project in the honours year, several prototypes were created and evaluated before moving on to more features. On the day of the presentation, the project will be in the evaluation phase of the threading implementation.
4.3.2. UML
The diagrams were created in Visual Studio 2010. This can be done by adding a modeling project, inside which Visual Studio can create items such as UML diagrams. The diagrams are stored as part of the project, and can help anyone who joins the project to understand its basic architecture without needing any external source.
4.3.2.1. Use case diagram
The user will use this application to perform some simple actions. All the actions a user can perform on the application can be drawn in a use case diagram. The use case diagram represents every action that can be done by a user, without considering the software engineering or the technologies used in the project.
Figure 3
The user can do 2 main tasks: get information, or make adjustments. In the information section, the user can view the EXIF data of the picture, or read a histogram. In the adjustment part, the user can do 3 actions: adjust the brightness, the contrast or the saturation. For the saturation, global and specific-color saturation are 2 options that inherit from the main action.
4.3.2.2. Class diagram
The project is split into the application and the adjustment library. The adjustments are stored in an additional library, compiled as a DLL (dynamic link library).
Figure 4
5. Implementation
5.1. User Interface
The project is designed to be used by everyone. The user could be a professional photographer, or a standard user who wants to adjust one of his pictures.
The main requirement for a good graphical interface is a clear view of the possibilities provided by the application. On first launching the application, the user has to understand quickly where to act on the interface to achieve what he wants.
The interface is designed to be easily understandable at first sight by any user. To that end, some paper drafts were drawn. The drafts were evaluated, and one interface was selected to be tried in the graphical user interface designer of Visual Studio 2010. (Figure 5)
Figure 5
The main area of the design is a PictureBox component. The PictureBox element is a graphical element of the C# designer into which a picture can be loaded and displayed to the user. It can load different types of images, such as .bmp (bitmap), .jpg and .png.
On the left of the interface, a tab panel is implemented. This panel displays either information about the picture or modification tools, depending on the selected tab. There are currently 4 tabs. The first one is used to display information about the image (EXIF) and the histogram of the
The picture object contains a bitmap and the various pieces of information that the image may carry, such as the EXIF data.
5.1.2. Saving a modified image
The application saves the image in JPG format. The simplest way to save the last modified image is to take it from the main displayed picture and give it to the SaveFileDialog tool to save it as a JPG image.
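The save step described above might look like the following sketch; the pictureBox control name is an assumption for illustration, not necessarily the project's:

```csharp
// Save the currently displayed image as a JPG chosen via SaveFileDialog.
using (var dialog = new SaveFileDialog())
{
    dialog.Filter = "JPEG image|*.jpg";
    if (dialog.ShowDialog() == DialogResult.OK)
    {
        pictureBox.Image.Save(dialog.FileName,
            System.Drawing.Imaging.ImageFormat.Jpeg);
    }
}
```

The Filter string restricts the dialog to .jpg files, and Image.Save re-encodes whatever bitmap the PictureBox currently holds.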
5.2. EXIF data of the picture
The EXIF data of a picture is information stored inside the picture file. The information can be very diverse, such as how the picture was taken, or the different parameters of the camera that took it.
All this information is stored as metadata. A metadata item is a simple piece of information stored at a specific place in the file, and each item is encoded at exactly the same place in every JPG file. Each metadata item in the JPG file is exposed as a property of the Image class in C#. To pick out the properties we are interested in, we need to know exactly which tag ID corresponds to which property, and how it is encoded. The project focuses only on the specific properties grouped in the table below. These elements are the main ones a photographer, or anyone interested in photography, will want to look at.
Property name         EXIF tag ID   Encoding
Focal Length          37386         Rational
ISO Speed             34855         Int16
Aperture (FNumber)    33437         Rational
Exposure Time         33434         Rational
Creation Date         306           ASCII
Make of the Camera    271           ASCII
Model of the Camera   272           ASCII
Metering Mode         37383         Int16
Orientation           274           Int16
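One of these properties can be read through GDI+'s Image.GetPropertyItem; a hedged sketch (the file path is only an example, and the TrimEnd handles the trailing NUL that terminates ASCII EXIF strings):

```csharp
using System.Drawing;
using System.Drawing.Imaging;
using System.Text;

class ExifDemo
{
    // Return the camera model string (EXIF tag 272) from an image file.
    public static string CameraModel(string path)
    {
        using (Image img = Image.FromFile(path))
        {
            PropertyItem item = img.GetPropertyItem(272);
            return Encoding.ASCII.GetString(item.Value).TrimEnd('\0');
        }
    }
}
```

GetPropertyItem throws if the tag is absent, which is why the application falls back to a null value when no EXIF data is found (see the evaluation chapter).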
A picture taken by any camera is a combination of 4 parameters (focal length, aperture, exposure time, ISO speed). These parameters combine optical and electronic techniques
5.3.1. Brightness and contrast modification
The contrast modification is the coefficient by which each color component is multiplied, and the brightness is the amount added to each component.
float v = (float)(cValue + 100) / 100f;
float m = 0.5f * (1.0f - v) + (float)bValue / 100f;
// The matrix is composed of 3 rows scaling the Red, Green and Blue
// components by the contrast coefficient v, then the row for the Alpha
// component, and finally the translation row, which adds the brightness
// offset m to each color component.
cm = new ColorMatrix(new float[][]{
    new float[]{v,0,0,0,0},
    new float[]{0,v,0,0,0},
    new float[]{0,0,v,0,0},
    new float[]{0,0,0,1,0},
    new float[]{m,m,m,0,1}
});
ImageAttributes ia = new ImageAttributes();
ia.SetColorMatrix(cm);
The cValue in the previous example code represents the contrast value given by the contrast track bar, and bValue the brightness value from the brightness track bar. The contrast value is first translated into a value by which the color components are multiplied: if the contrast value is 75, each color component is multiplied by 1.75. Then the brightness is added. The brightness value is computed as a number between -1.0 and 1.0. If the contrast is changed at the same time, its value is taken into account in the algorithm: the brightness offset is shifted by half the difference between the contrast coefficient and the neutral contrast coefficient, the 0.5f * (1.0f - v) term, which pivots the contrast change around the middle of the component range.
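The arithmetic of those matrix rows can be reproduced for a single component to check the behavior; this helper is an illustration, not project code:

```csharp
class AdjustDemo
{
    // Apply the contrast coefficient v and brightness offset m to one
    // color component (0..1 range), exactly as the ColorMatrix above does.
    public static float Adjust(float component, int cValue, int bValue)
    {
        float v = (cValue + 100) / 100f;              // contrast multiplier
        float m = 0.5f * (1.0f - v) + bValue / 100f;  // brightness offset
        return component * v + m;
    }
}
```

For example, Adjust(0.5f, 75, 0) still returns 0.5: the 0.5f * (1.0f - v) term cancels the contrast gain at mid-gray, so contrast stretches values away from 0.5 instead of brightening the whole image.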
5.3.2. Saturation
The color saturation describes the intensity of color in an image. Highly saturated images have overly bright colors, and de-saturating a color image translates it into a monochrome one. The saturation algorithm uses 3 weight values, given by Adobe as the luminance values for the colors. The color green is the most recognizable to the human eye, so its luminance value is high; blue is the color the human eye perceives least, and red is in the middle range. The sum of the three values is 1. The saturation value given by the track bar must be added to the weights: the color saturation is weighted, and the complementary colors have to be weighted too. The whole matrix is then multiplied with the color vector.
The main component of each color is weighted by its luminance, with the saturation applied in addition; the complementary colors are only weighted by the luminance vector.
float red = (float) 0.3086;
float green = (float) 0.6094;
float blue = (float) 0.0820;
float sat = (float) (satValue + 100) / 100f;
// For each color, the diagonal entry carries the luminance weight plus
// the saturation, and the off-diagonal entries carry the weight only.
float redSaturation = (1 - sat) * red + sat;
float redSaturationComp = (1 - sat) * red;
float greenSaturation = (1 - sat) * green + sat;
float greenSaturationComp = (1 - sat) * green;
float blueSaturation = (1 - sat) * blue + sat;
float blueSaturationComp = (1 - sat) * blue;
cm = new ColorMatrix(new float[][]{
    new float[]{redSaturation,redSaturationComp,redSaturationComp,0,0},
    new float[]{greenSaturationComp,greenSaturation,greenSaturationComp,0,0},
    new float[]{blueSaturationComp,blueSaturationComp,blueSaturation,0,0},
    new float[]{0,0,0,1,0},
    new float[]{0,0,0,0,1}
});
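The matrix can be sanity-checked numerically: with the track bar at 0 (sat = 1) it is the identity, and at -100 (sat = 0) every channel collapses to the same luminance-weighted gray. The helper below multiplies a color by the 3x3 part of that matrix, applying the row vector on the left as GDI+ does (an illustration, not project code):

```csharp
class SaturationDemo
{
    // new = [r g b] * M, using the same entries as the ColorMatrix above.
    public static float[] Saturate(float r, float g, float b, int satValue)
    {
        const float wr = 0.3086f, wg = 0.6094f, wb = 0.0820f;
        float sat = (satValue + 100) / 100f;
        float rs = (1 - sat) * wr + sat, rc = (1 - sat) * wr;
        float gs = (1 - sat) * wg + sat, gc = (1 - sat) * wg;
        float bs = (1 - sat) * wb + sat, bc = (1 - sat) * wb;
        return new[]
        {
            r * rs + g * gc + b * bc,  // new red
            r * rc + g * gs + b * bc,  // new green
            r * rc + g * gc + b * bs   // new blue
        };
    }
}
```

With satValue = -100, pure red (1, 0, 0) becomes (0.3086, 0.3086, 0.3086): the gray level is exactly the red luminance weight.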
5.3.3. Specific color saturation
The color filter did not give the results wanted; instead, the algorithm gives the possibility to apply a color filter on the image.
Normally, to saturate a specific color, the channel of that color has to be multiplied by a vector while the other channels are left unchanged; and de-saturation has to apply a filter that the other colors can pass through, but not the color we want to de-saturate. Human vision is a perception of color: if a filter is placed in front of the eyes before they analyze the perceived color, one color can be removed (filtered out).
Figure 7
There are 3 filters that block the primary colors:
Cyan: blocks Red, lets Blue and Green through
Yellow: blocks Blue, lets Red and Green through
Magenta: blocks Green, lets Red and Blue through
float blue = (float)0.0820;
float sat = (float)(satValue + 100) / 100f;
float blueSat = (1 - sat) * blue + sat;
float bComp = (1 - sat) * blue;
cm = new ColorMatrix(new float[][]{
    new float[]{1,0,0,0,0},
    new float[]{0,1,0,0,0},
    new float[]{0,0,1,0,0},
    new float[]{0,0,0,1,0},
    // Translation row: adds the offset blueSat to the blue channel only.
    new float[]{0,0,blueSat,0,1}
});
ImageAttributes ia = new ImageAttributes();
ia.SetColorMatrix(cm);
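For completeness, the ImageAttributes object built in these snippets is typically applied when redrawing the bitmap through GDI+; a hedged sketch, with source standing for the loaded bitmap:

```csharp
// Redraw "source" through the ColorMatrix carried by "ia"; the matrix is
// applied to every pixel during the draw.
Bitmap result = new Bitmap(source.Width, source.Height);
using (Graphics g = Graphics.FromImage(result))
{
    g.DrawImage(source,
        new Rectangle(0, 0, source.Width, source.Height),
        0, 0, source.Width, source.Height,
        GraphicsUnit.Pixel, ia);
}
```

This overload of DrawImage is the one that accepts an ImageAttributes argument, which is how the matrix reaches the pixels.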
5.4. Histogram
One of the ways to evaluate whether an image is good is to check its histogram. The histogram presents, in a graphical way, the statistics of the intensity of the colors present in the image. A histogram can also be used to show the brightness of an image, for monochrome pictures.
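The pre-processing amounts to counting, for each channel, how many pixels take each of the 256 intensity values; a simple sketch (GetPixel is slow, and the project may well use faster pixel access):

```csharp
using System.Drawing;

class HistogramDemo
{
    // 256-bin histogram of the red channel of a bitmap.
    public static int[] RedHistogram(Bitmap bmp)
    {
        int[] counts = new int[256];
        for (int y = 0; y < bmp.Height; y++)
            for (int x = 0; x < bmp.Width; x++)
                counts[bmp.GetPixel(x, y).R]++;
        return counts;
    }
}
```

The same loop, repeated for the G and B components, gives the three curves the tool draws.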
Drawing a histogram requires some statistics to be calculated as pre-processing. A pixel stores each color component as an 8-bit value, so the value for a color varies from 0 to 255 (2^8 = 256). A value of 0 represents a total absence of that color, and 255 means the color reaches its maximum intensity. Black is the absence of all 3 primary components, so it is encoded Red = 0, Green = 0, Blue = 0, and the inverse for white; but for the color blue the
will be put into a wait state, and when the flag becomes free, only one thread can acquire it.
The project implements another type of synchronization with a semaphore. The semaphore is a simple object holding only a value, initially set to 0. This object is shared between threads, and a thread can try to acquire a unit from the semaphore, or release a unit into it. These two methods are protected inside a critical section with the Monitor class.
When a thread wants to acquire the semaphore, the semaphore gives it a unit (in fact, reduces its value by one); but if the value held by the semaphore is 0, the semaphore puts the asking thread into a waiting state until another thread releases a unit. The release method enters the critical section to add a unit, and then signals to the threads waiting on the semaphore that a new unit has just been posted, so that one of them wakes up.
class Semaphore
{
    private int val;

    public Semaphore()
    {
        val = 0;
    }

    public void acquire()
    {
        Monitor.Enter(this);
        // Wait until a unit is available.
        while (val == 0)
        {
            Monitor.Wait(this);
        }
        val--;
        Monitor.Exit(this);
    }

    public void release()
    {
        Monitor.Enter(this);
        val++;
        // Wake one thread waiting in acquire().
        Monitor.Pulse(this);
        Monitor.Exit(this);
    }
}
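A minimal usage sketch of this handshake between two threads follows; the thread bodies are invented for the example, not the project's actual handlers, and the class is repeated under the name CountingSemaphore so the sketch is self-contained and avoids clashing with System.Threading.Semaphore:

```csharp
using System.Threading;

// Copy of the Semaphore class above, renamed for this sketch.
class CountingSemaphore
{
    private int val = 0;

    public void acquire()
    {
        Monitor.Enter(this);
        while (val == 0) Monitor.Wait(this);
        val--;
        Monitor.Exit(this);
    }

    public void release()
    {
        Monitor.Enter(this);
        val++;
        Monitor.Pulse(this);
        Monitor.Exit(this);
    }
}

class HandshakeDemo
{
    public static int Run()
    {
        int result = 0;
        var ready = new CountingSemaphore();
        var worker = new Thread(() =>
        {
            result = 42;      // stands in for building the histogram image
            ready.release();  // signal that the result is available
        });
        worker.Start();
        ready.acquire();      // blocks until release() has been called
        worker.Join();
        return result;        // guaranteed to be 42 here
    }
}
```

The acquire() call cannot return before release() has run, which is exactly the ordering the histogram handler needs.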
The synchronization has to be between the painting handler thread and the histogram creator thread. Since the completion of the histogram must be signalled, the histogram creation thread calls the release method after the histogram drawing function has finished, when the histogram image is ready to be picked up by the handler. Meanwhile, the handler thread tries to acquire the semaphore; as long as the histogram is not ready, the handler is put in a wait state. When the
6. Evaluation
6.1. System Testing
The application is developed on the basis of an evolving prototype, and the system is tested all along the development lifetime. This technique allows bugs or design errors to be discovered and quickly fixed, without crashing the whole application or changing the development plan.
The first set of tests was made on the design of the graphical interface. The tests tried to evaluate the interface and see whether any elements could be better organized. Tests were made with various unitary console writes to check the actions performed on the interface.
The second set of tests was made after the implementation of the EXIF display tool. EXIF data is placed at the same place in every JPG image, so for any photograph taken with a DSLR, a compact camera or a phone, the image should include EXIF data. The tests worked well, especially for DSLR and compact cameras. The aperture gave wrong values only below f/2: for example, an aperture of f/1.8 is transcribed as f/.9. The other thing that came up during testing is that not all devices record EXIF data; for example, some pictures from phone cameras do not have any EXIF, and a null value is now set when no EXIF is found.
The third set of tests is on the adjustment library. The library must produce the adjusted image as quickly as possible, but also render well in the displayed image. The brightness and contrast algorithm gives back an image comparable to commercial software such as Adobe Lightroom, and the same goes for the saturation algorithm, which to human perception gives exactly the same rendering.
The last set of tests is on the implementation of the histogram tool and the threading of the application. The histogram worked without threading, but froze the application for around 10 seconds, and the same happened every time it needed to be recomputed; that is the main reason for threading the application. The histogram calculation must not disturb the normal use of the whole software. Threading the application took many different attempts to find the best approach, with a lot of time spent getting the threads to interlock without blocking the GUI; this included trying Mutex and the BackgroundWorker facility. The threads now interlock without causing any deadlock, but the delegate part is not working. The delegate implementation is very recent in the development lifetime; the problem was found when the threads were working properly but no histogram was displayed to the user.