In this thesis, we have used randomly generated workloads to evaluate the performance of the proposed fair resource allocation algorithms (MLF-DRS, FFMRA, H-FFFMRA). The generated workloads were fed directly into the simulation environment, and the experiments recorded metrics such as resource allocation, utilization, and fairness. A sample CSV file (workload_0) is provided to illustrate what the raw data produced by the mathematical models looks like; it describes a single task with varying resource demands over a time series. Each generated workload is used to determine the allocation relative to the other tasks submitted to the system, so depending on how many incoming tasks are in the system, there will be correspondingly many such files (workload_1, workload_2, and so on). In all experiment outputs, the first column records the time at which the measurement was taken. The file names have been chosen so that anyone can easily identify the related data.

For the MRFS algorithm, we have used the Google cluster traces, an open dataset released for research and experimentation purposes. Since the dataset is large (40.1 GB), it was not stored with the thesis material; instead, it was read during the experiments through the dataset link built into the CloudSim framework (a tool for simulating cloud environments). The dataset is available for download through the Google storage service, and its format is documented at https://github.com/google/cluster-data/blob/master/ClusterData2011_2.md.

Furthermore, all experiment outputs have been organized into dedicated folders to make them easy to locate. All source code for the algorithms is included under the folder "SOURCE_CODES", with a corresponding guide in the "README.txt" file.
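As an illustration of the workload format described above, the following minimal Python sketch shows how a file such as workload_0 could be produced. The column names (time, cpu_demand, memory_demand) and the uniform random demands are assumptions made for illustration only; the actual generators are those provided in the "SOURCE_CODES" folder.

    import csv
    import random

    def generate_workload(path, steps=100, seed=0):
        """Write a time series of random resource demands for one task."""
        rng = random.Random(seed)
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            # First column is time, matching the experiment outputs.
            writer.writerow(["time", "cpu_demand", "memory_demand"])
            for t in range(steps):
                writer.writerow([t,
                                 round(rng.uniform(0.0, 1.0), 4),
                                 round(rng.uniform(0.0, 1.0), 4)])

    # One file per incoming task: workload_0.csv, workload_1.csv, ...
    for task_id in range(3):
        generate_workload(f"workload_{task_id}.csv", seed=task_id)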
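Similarly, because the Google trace is too large to load into memory at once, it can be processed incrementally. The sketch below assumes the task_events part files of ClusterData2011_2 and an illustrative subset of the published column schema; it is not the trace handling performed inside CloudSim, only a sketch of chunked reading.

    import pandas as pd

    # One shard of the trace; the task events are split across 500 part files.
    TRACE_FILE = "task_events/part-00000-of-00500.csv.gz"

    # The trace files have no header row; these names follow the published
    # ClusterData2011_2 schema (stated here as an assumption for illustration).
    columns = ["timestamp", "missing_info", "job_id", "task_index",
               "machine_id", "event_type", "user", "scheduling_class",
               "priority", "cpu_request", "memory_request", "disk_request",
               "different_machine"]

    for chunk in pd.read_csv(TRACE_FILE, names=columns, header=None,
                             chunksize=100_000):
        # Extract per-task resource demands from each chunk.
        demands = chunk[["cpu_request", "memory_request"]].dropna()
        # ... feed each batch of demands into the allocation experiment ...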