In recent decades, heterogeneous parallel and distributed systems such as Grids and Clouds have become common environments for the execution of large-scale applications. The workflow has emerged as an attractive paradigm for modeling large-scale scientific applications in such systems. Scientific workflows can be viewed as a parallel computing technique for orchestrating complex and large computations for data analysis and simulation across many scientific domains. To execute workflow applications efficiently in distributed heterogeneous systems, the scheduling of activities and data transfers is a key problem. Scheduling can be viewed from two perspectives: local scheduling assigns activities within a single site or resource, whereas global scheduling considers activities across the whole distributed system. The present research concentrates on global scheduling. For objective functions such as the overall completion time (makespan), the scheduling problem is NP-complete. Due to this complexity, we rely on near-optimal solutions attained by heuristics that we propose in order to approximate the optimal ones.

Different actors within decentralized heterogeneous environments (e.g., users and providers) typically view the problem from different perspectives. Scientists are usually interested in minimizing the execution time of their applications, whereas system administrators often aim to maximize resource utilization, job throughput, or user fairness. Real-world scheduling problems are therefore frequently confronted with multi-objective scenarios in which many of these important objectives conflict. For example, powerful processors are typically rented by Cloud computing providers at a higher price, consume more energy, and may become unreliable due to the large number of requests and user contention.
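To make the heuristic idea concrete, the following toy sketch shows a greedy list-scheduling rule on heterogeneous processors: each task is assigned to the processor on which it would finish earliest, yielding a near-optimal (not necessarily optimal) makespan in polynomial time. This is an illustrative example only, not one of the heuristics proposed in this dissertation; the task runtimes and the earliest-finish-time rule are assumptions made for the sake of the sketch.

```python
# Greedy list-scheduling sketch for independent tasks on heterogeneous
# processors (illustrative only; not the dissertation's algorithms).
# Each task's runtime depends on the processor it runs on.

def greedy_schedule(runtimes):
    """runtimes[t][p] = execution time of task t on processor p.
    Returns (assignment, makespan), where assignment[t] is the chosen
    processor for task t."""
    n_procs = len(runtimes[0])
    finish = [0.0] * n_procs  # current finish time of each processor
    assignment = []
    for task_costs in runtimes:
        # pick the processor on which this task finishes earliest
        best_p = min(range(n_procs), key=lambda p: finish[p] + task_costs[p])
        finish[best_p] += task_costs[best_p]
        assignment.append(best_p)
    return assignment, max(finish)

# Three tasks, two heterogeneous processors:
assignment, makespan = greedy_schedule([[3, 5], [2, 4], [4, 2]])
```

Because each assignment decision ignores future tasks, the result may be suboptimal, which is exactly why such heuristics only approximate the NP-complete optimum.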
Related scheduling work is typically restricted to optimizing a single objective (usually makespan), while a few isolated approaches have tried to optimize two criteria. A generic scheduling framework and heuristic-based algorithms for optimizing multiple conflicting criteria are still missing. In a particular Grid environment, for example, the main challenges might be reliability and load: Grid sites are usually managed by different administrations, and their resource availability might change at runtime. Moreover, due to the autonomous and decentralized nature of Grids, the resources available to a given user in the near future are unpredictable. In IaaS (Infrastructure as a Service) Clouds such as Amazon EC2, we may ignore the reliability and external loads of resources; instead, we face the challenge of on-demand resource provisioning. Commercial IaaS Clouds have no predefined topology, so users can arrange the computing resources on demand, as they wish. The pricing model of such Clouds, combined with the scheduling constraints, further increases the complexity of scheduling applications in these environments.

This dissertation focuses on polynomial-time, multi-objective scheduling algorithms for the execution of scientific workflow applications in heterogeneous distributed systems, and particularly in Clouds. The goal is to schedule workflow activities and data transfers efficiently, employing both static and dynamic approaches to fulfill the scheduling objectives. Several important objectives, including application makespan, economic cost of execution, energy cost of execution, and reliability of execution, are considered as optimization goals. Our approaches leverage theories such as multi-objective optimization and game theory.
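The notion of conflicting objectives can be stated precisely through Pareto dominance: one schedule dominates another if it is no worse in every objective and strictly better in at least one. The sketch below illustrates this definition for candidate schedules scored on (makespan, economic cost, energy), all to be minimized; the objective vectors are invented for illustration and the code is not one of the dissertation's algorithms.

```python
# Pareto dominance over conflicting scheduling objectives, all minimized
# (illustrative sketch only; objective values are hypothetical).

def dominates(a, b):
    """True if objective vector a Pareto-dominates b: a is no worse in
    every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    """Keep only the schedules not dominated by any other schedule."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Candidate schedules as (makespan, cost, energy) tuples:
candidates = [(10, 5, 7), (12, 3, 6), (11, 6, 8), (10, 5, 9)]
front = pareto_front(candidates)
```

In the example, (12, 3, 6) survives alongside (10, 5, 7) because its lower cost and energy trade against a longer makespan; no single schedule is best in all objectives, which is the multi-objective scenario targeted by this work.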