EP1433054A2 - Methods and apparatus for slack stealing with dynamic threads - Google Patents
Methods and apparatus for slack stealing with dynamic threads
- Publication number
- EP1433054A2 (Application EP01939818A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- slack
- time
- tasks
- thread
- task
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
- G06F9/4887—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues involving deadlines, e.g. rate based, periodic
Definitions
- the present invention is related to the following inventions which are assigned to the same assignee as the present invention:
- the present invention is also related to the following invention which is assigned to the same assignee as the present invention and which was filed on even date herewith:
- the present invention relates generally to task scheduling within a multitasking system and, in particular, to determining available slack and allocating slack to high priority, non-critical, dynamic tasks or threads.
- Computer processes are often subdivided into a variety of functions which may be executed as tasks in serial and/or parallel fashion. These computer processes can be used to gather and act upon information, and to bring about some result in response to the information. These functional task systems find use in a variety of important environments. Examples may include monitor and control of an industrial process, such as a power generation and distribution system, or monitor and control of complex equipment, such as an aircraft or spacecraft.
- execution of tasks can include both periodic tasks and aperiodic tasks.
- One known way of executing periodic tasks is to use rate monotonic scheduling (RMS).
- the classical RMS formulation is for strictly periodic task sets. To specify a periodic task set, each of n tasks, say τ_i, where 1 ≤ i ≤ n, is associated with a period T_i and a worst-case compute time C_i. Each task τ_i will be dispatched and executed at the rate 1/T_i, and in the worst case it will consume processor time equal to C_i at each dispatch.
- Each task is implicitly assigned a priority that is determined by its rate (or equivalently, the inverse of its period), with priority equal to the ranking of the rate.
- the RMS scheduler schedules periodic tasks having hard deadlines. Viable real-time systems must also be capable of executing aperiodic tasks, which can have hard deadlines or soft deadlines. It is desirable for a task scheduling system to schedule a mixture of periodic and aperiodic tasks in such a way that all periodic task deadlines are met and the response times for the aperiodic tasks are as small as possible.
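The rate-to-priority ranking described above can be sketched as follows; this is an illustrative example, not code from the patent.

```python
# Illustrative sketch of rate monotonic priority assignment: each task's
# priority is the ranking of its rate 1/T, so a shorter period (higher
# rate) yields a higher priority, with 1 as the highest priority.
def assign_rms_priorities(periods):
    order = sorted(range(len(periods)), key=lambda i: periods[i])
    priorities = [0] * len(periods)
    for rank, task_index in enumerate(order, start=1):
        priorities[task_index] = rank
    return priorities
```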
- the slack-stealing algorithm as described above was limited to only a static set of execution threads, i.e. a fixed set of recurring tasks without any new periodic tasks being activated and without any periodic tasks being deactivated.
- actual real-time processing environments typically contain a dynamic set of threads, as certain periodic tasks become active and others become inactive.
- the invention deals with task scheduling for a set of periodic and aperiodic tasks, in an environment wherein threads are dynamic.
- the invention provides a multitasking system executing real-time harmonic and dynamic tasks that can request activation or deactivation at any time (their actual activation or deactivation occurs at their next period boundary).
- the invention further provides a method of scheduling tasks that includes assigning priority levels to tasks, and determining available slack for tasks at each priority level, taking into account tasks that are activating and inactivating. The method further includes allocating slack to tasks in order of priority.
- the invention provides a multitasking system that includes a processor and a plurality of tasks operating on the processor. Each task of the plurality of tasks is of a task type selected from the group consisting of periodic and aperiodic tasks.
- the system further includes an executive in communication with the processor and controlling dispatching of tasks on the processor.
- the executive includes a first module that determines available slack, taking into account periodic tasks that are activating and deactivating at unpredictable times, and the executive further includes a second module that allocates available slack to aperiodic tasks.
- the multitasking system is a flight control system.
- the invention provides a machine-readable medium having instructions stored thereon capable of causing a processor to carry out a method.
- the method includes assigning priority levels to tasks, determining available slack for tasks at each priority level taking into account tasks that are activating and inactivating, and allocating slack to tasks in order of priority.
- FIG. 1 is a schematic of a flight control system for use in accordance with an embodiment of the invention
- FIG. 2 is a block diagram of a multitask system in accordance with an embodiment of the invention
- FIG. 3 is a task execution timeline illustrating classic rate monotonic scheduling of three tasks in the prior art
- FIG. 4 is a task execution timeline illustrating unclaimed available/unused slack
- FIG. 5 is a task execution timeline illustrating timeline slack, reclaimed slack, and idle time
- FIG. 6 is a task execution timeline illustrating periodic execution times when there are continual requests for slack consumption by aperiodic tasks at four different slack levels;
- FIG. 7 is a task execution timeline illustrating periodic execution times when there is a continual request for slack consumption at a first level
- FIG. 8 is a task execution timeline illustrating periodic execution times when there is an initial request for slack consumption at a second level for more than 10 units of slack in the interval [15,16];
- FIG. 9 illustrates a group of periodic task timelines
- FIG. 10 is a task execution timeline illustrating a worst-case timeline
- FIG. 11 is a task execution timeline illustrating slack reclamation from unused compute times
- FIG. 12 is a task execution timeline illustrating a conceptual model for dynamic thread activations
- FIG. 13 is a task execution timeline illustrating an observed thread activation with no slack requests
- FIG. 14 is a task execution timeline illustrating a thread activation with slack requests, in accordance with an embodiment of the invention
- FIG. 15 is a task execution timeline illustrating task execution without slack stealing
- FIG. 16 is a task execution timeline illustrating task execution using both reclaimed and timeline slack
- FIG. 17 is a task execution timeline illustrating task execution using reclaimed slack
- FIGS. 18A and 18B together are a process flowchart encompassing various methods of task scheduling that employ slack stealing in a system executing harmonic and dynamic tasks, in accordance with an embodiment of the invention.
- FIG. 19 depicts a block diagram of a processor coupled to a machine-readable medium.
- In Section 2, a Slack Scheduling Background is provided as a basis for understanding the present invention.
- In Section 3, Task Scheduling in the Digital Engine Operating System (DEOS) is described. While the present invention is not limited to any particular implementation, it is useful in understanding the invention to describe one implementation in considerable depth.
- Section 1 Exemplary Operating Environment
- FIG. 1 is a schematic of a flight control system 1 for use in accordance with an embodiment of the invention.
- Flight control computer 3 executes tasks at some periodic rate.
- Flight control computer 3 receives sensor data 6 from sensor 2, performs computations with sensor data 6 and, optionally, with state data 5 computed in a previous dispatch as inputs, and generates an output 7 to an actuator 4.
- While disclosed in the context of a flight control system, the present invention can be used with many types of data processing systems, such as industrial control systems, vehicular control systems, communications processors, real-time control systems, general purpose data processing systems, and other types of data processing systems whose functional requirements are satisfied by the present invention.
- Flight control computer 3 executes applications on top of a multi-threading operating system.
- the operating system is the Digital Engine Operating System (DEOS) developed by Honeywell, Inc., Minneapolis, Minnesota, U.S.A.
- a multi-threaded operating system serves as a "traffic cop" that allows a plurality of computer programs to execute concurrently without interfering with one another.
- DEOS provides preemptive fixed priority scheduling of periodic and aperiodic tasks. For periodic tasks, priorities are assigned inversely with period or deadline, so that tasks with shorter periods or deadlines have higher scheduling priorities. Aperiodic tasks are also assigned a rate or period that determines the slack request level, but the priorities of aperiodic tasks are dynamic when they are actually running.
- slack scheduling is provided for a set of harmonic and dynamic tasks or threads.
- a harmonic task set comprises a plurality of tasks or threads whose periods are static.
- a “static” task set is defined as one that comprises tasks that are scheduled at start-up time and that persist over time until the computer is shut down.
- a “dynamic” task or thread is defined as a task that can unpredictably start up or finish and that is not scheduled at start-up time. Once executing, a dynamic task can be periodic and predictable, and it can be scheduled in a fashion similar to a static task.
- Dynamic threads include periodic tasks that can be activated or inactivated at their next period boundary. Dynamic threads also include aperiodic tasks. Dynamic threads can also include slack consumers that dynamically (i.e. at any time) request slack.
- slack-stealing is provided in a multitasking system executing real-time harmonic and dynamic tasks having various priority levels. Slack is stolen from both timeline and reclaimed slack to enable the execution of high priority non-essential tasks on a best efforts basis.
- the slack available for stealing equals timeline slack plus reclaimed slack minus idle time. Counts of the amount of slack consumed, slack reclaimed, and periodic compute time consumed are maintained by individual priority level and dynamically updated at certain times. Idle time is calculated by priority level. Available slack is calculated, and slack is allocated and consumed by rate, with the highest rate first and the lowest rate last.
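As a hedged sketch of the bookkeeping just described (the per-level accumulator names here are illustrative, not the names used in the actual implementation):

```python
# Available slack at a priority level, per the formula above: timeline
# slack plus reclaimed slack, minus slack already consumed and idle time
# charged at that level.  All accumulators are maintained per level.
def available_slack(level, timeline, reclaimed, consumed, idle):
    return timeline[level] + reclaimed[level] - consumed[level] - idle[level]
```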
- Multitask system 10 comprises at least one processor 11 and at least one memory 12.
- Memory 12 can be implemented as any one or more memory units which can include random access memory (RAM), read only memory (ROM), cache memory, hard disk, removable memory such as diskettes, flash memory card, optical compact disk (CD), or the like.
- the particular architectural implementation is not limited and can be designed in accordance with the system's requirements. Thus, some portions of memory 12 may reside within processor 11, such as cache memory and/or RAM.
- Memory 12 comprises a plurality of tasks 14-17 of which tasks 14-16 are schedulable application tasks.
- Executive task 17 is responsible for carrying out a number of supervisory tasks, including the scheduling of application tasks 14-16.
- Each application task 14-17 is repeatedly dispatched, either at some fixed rate for periodic tasks or in response to some event, for example a software-generated interrupt or other event, for aperiodic tasks.
- task dispatches i.e., when a task is placed in a prioritized ready queue
- deadlines i.e., some system-defined deadline or other constraint for completion of the task
- task start time i.e., when computing of the task begins
- complete times i.e., when computing of the task is complete
- Tasks are characterized using four primary parameters. The class of a task is either periodic, i.e., regularly scheduled for dispatch, or aperiodic, i.e., dispatched in response to some non-scheduled event.
- the period T_i of a task is the interval between dispatches of a periodic task, or the minimum interval between event arrivals for an aperiodic task.
- the compute time of a task is the upper bound on the amount of processor time required for an instance of that task to complete after each dispatch. In practice, the degree of assurance that the actual compute time will not exceed this value varies depending on the task.
- the criticality of a task in one embodiment is an integer value used to control scheduling behavior when processors are overloaded, i.e., where some subset of tasks is unschedulable. While such a numerical ranking system is convenient for implementation, other ranking systems may be utilized.
- the schedulability of a task is affected only by tasks on the same processor having a criticality equal to or greater than its own criticality. Lower criticality tasks may exceed their stated compute times, or, for aperiodic tasks, may be dispatched at a higher rate than their stated periods, without causing a higher criticality task to miss a deadline.
- FIG. 3 is a task execution timeline illustrating classic rate monotonic scheduling of three tasks τ_1, τ_2, and τ_3 in the prior art.
- the "hyperperiod", or least common multiple of periods T_1, T_2, and T_3, is 30 in this example.
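The hyperperiod can be computed as the least common multiple of the task periods; the period values in this snippet are illustrative, since the example's exact values are given in the referenced tables.

```python
from math import lcm

# Hyperperiod = least common multiple of all task periods.
def hyperperiod(periods):
    return lcm(*periods)
```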
- the rate monotonic scheduler is a fixed-priority preemptive scheduler that simply services the task with the highest priority in the ready queue.
- RMS also has an accompanying analysis that can be used to check whether the schedule is feasible. It is characterized quite simply by what is known as the exact schedulability criterion, which is given by
- Equation (1): max_{1 ≤ i ≤ n} min_{t ∈ S_i} W_i(t) / t ≤ 1
- Equation 1 applies to non-harmonic task sets as well as harmonic task sets.
- the task set whose timeline execution is illustrated in FIG. 3 is defined in
- S_i is the set of all scheduling points for τ_i; we need not consider points beyond S_i to determine scheduling feasibility.
- Let W_i(t) be the amount of level i work dispatched in the interval [0,t], where the level i work is the amount of work requested by tasks with priority greater than or equal to i (i.e., those tasks with priority 1, 2, ..., i). Then W_i(t) = sum_{j=1}^{i} C_j * ceil(t / T_j).
- Equation 1 in our example asks whether min(h_1, h_2, h_3) = min(1/6, 3/7, 9/21) ≤ 1, which it is.
- Equation 1 simplifies notably, which allows for rapid on-line acceptance tests.
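A sketch of the exact schedulability test of Equation 1 might look as follows, using the standard choice of scheduling points S_i (multiples of higher-priority periods up to T_i); this is an interpretation of the criterion, not the patent's own code.

```python
from math import ceil

def rms_schedulable(tasks):
    """Exact schedulability test: tasks is a list of (C, T) pairs sorted by
    increasing period.  Feasible iff, for every task i, some scheduling
    point t in S_i satisfies W_i(t) <= t."""
    for i in range(len(tasks)):
        Ti = tasks[i][1]
        # S_i: multiples of periods T_j (j <= i) that do not exceed T_i.
        points = sorted({k * Tj for (_, Tj) in tasks[:i + 1]
                         for k in range(1, int(Ti // Tj) + 1)})
        # W_i(t): level-i work dispatched in [0, t].
        if not any(sum(Cj * ceil(t / Tj) for (Cj, Tj) in tasks[:i + 1]) <= t
                   for t in points):
            return False
    return True
```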
- Many extensions to make RMS practical in real systems have been developed within the past decade. Among them are allowing for intertask communication and specification of task criticalities (which can defy the implicit rate structure defined by the periods). Worst-case bounds for intertask communication have been developed for a strictly periodic task set. However, effectively modeling communication and executive execution times for mixed periodic and aperiodic task sets remains an area of research in the general case.
- Classical slack scheduling assumes a fixed set of periodic tasks. Using this set of tasks, a slack table that describes the slack inherent in the timeline is statically computed and stored for use at run-time. At run-time, a set of accumulators keeps track of the amount of time actually consumed by periodic processes (and also the amount of slack reclaimed from them), and the amount of time consumed by aperiodic tasks (where the idle task can be thought of as a special aperiodic task). These values are then added to or subtracted from the appropriate amount of precalculated slack to reflect what really happened. Subsequent sections define these operations more precisely.
- FIG. 4 is a task execution timeline illustrating unclaimed available/unused slack.
- a process has a worst-case execution time equal to C.
- the process begins executing and consumes C(t) execution time.
- the process halts execution.
- Timeline slack is equal to the period T minus the worst-case execution time C. In this example, no timeline slack was reclaimed, and the processor idle time was 0.
- FIG. 5 is a task execution timeline illustrating timeline slack, reclaimed slack, and idle time.
- a process has a worst-case execution time equal to C.
- the process begins executing and consumes C(t) execution time until it halts execution at t_1. Assume that the process does not resume during this period T.
- the time between t_1 and t_2 represents processor idle time.
- Timeline slack is equal to the period T minus the worst-case execution time C (at t_3).
- Reclaimed slack is the amount of slack that can be reclaimed due to the fact that the process halted execution before its worst-case execution time C. In this example, reclaimed slack equals C - C(t) (i.e., t_3 - t_1).
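The two slack quantities in FIGS. 4 and 5 can be expressed for a single task as follows (a simplification; in the system itself these quantities are tracked per priority level):

```python
# For one task with period T, worst-case compute time C, and actual
# compute time c_used <= C:
#   timeline slack  = T - C       (built into the schedule off-line)
#   reclaimed slack = C - c_used  (recovered at run-time on early completion)
def slack_breakdown(T, C, c_used):
    return T - C, C - c_used
```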
- FIG. 6 is a task execution timeline illustrating periodic execution times when there are continual requests for slack consumption by aperiodic tasks at four different slack levels.
- FIG. 6 illustrates slack available at four different slack levels, Level 1 being at the top, and Level 4 being at the bottom.
- the hyperperiod is 42.
- the periodic task set properties are summarized in Table 1 (Periodic Task Set) above.
- Slack allocated at level i means that the executions of dispatches of τ_i, τ_{i+1}, ..., τ_n will be delayed as long as possible to allow the task requesting slack to execute.
- Slack at level n+1 is equivalent to background processing.
- the TimelineSlack_{i,j} values are computed off-line and stored in a matrix which is loaded as a part of the run-time executive.
- the TimelineSlack_{i,j} matrix for the timeline slack example is shown in Table 4 (TimelineSlack_{i,j}) (Appendix A).
- the TimelineSlack_{i,0} matrix is also referred to as the accumulated slack matrix.
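For a harmonic task set, the off-line computation of accumulated timeline slack at each period boundary can be sketched as below. This is an assumption-laden illustration (worst-case compute times are consumed, and `tasks` is sorted by increasing period), not the patent's actual off-line tool.

```python
from math import ceil

# TimelineSlack[i][j]: accumulated level-(i+1) timeline slack through the
# (j+1)-th period boundary of task i on the worst-case timeline, i.e. the
# boundary time minus all level-i work dispatched by then.
def timeline_slack_matrix(tasks, hyperperiod):
    matrix = []
    for i, (Ci, Ti) in enumerate(tasks):
        row = []
        for j in range(1, hyperperiod // Ti + 1):
            t = j * Ti
            work = sum(Ck * ceil(t / Tk) for (Ck, Tk) in tasks[:i + 1])
            row.append(t - work)
        matrix.append(row)
    return matrix
```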
- the slack table is not static and must be efficiently recomputed at task activation and deactivation.
- Timeline slack can be stolen from tasks having "guaranteed" hard deadlines and made available to high priority non-essential tasks.
- execution of hard deadline tasks is delayed as long as possible without causing them to miss their deadlines. Even though they may be delayed, higher priority hard deadline tasks are still executed before lower priority hard deadline tasks.
- FIG. 7 is a task execution timeline illustrating periodic execution times when there is a continual request for slack consumption at a first level.
- the "hyperperiod", i.e., the least common multiple of periods T_1, T_2, and T_3, is 30 in this example.
- the slack table values for the periodic task set of FIG. 7 are provided in the static TimelineSlack_{i,0} table shown in Table 5 (Appendix A).
- Task τ_1 requires only 1 time unit (worst-case) to execute and thus provides 4 time units of timeline slack between each dispatch.
- FIG. 8 is a task execution timeline illustrating periodic execution times when there is an initial request for slack consumption at level 2 for more than 10 units of slack in the interval [15,16].
- the periodic task set for FIG. 8 is identical with that in FIG. 7.
- FIG. 9 illustrates a group of periodic task timelines.
- the periodic task set for FIG. 9 is identical with that in FIG. 3.
- the top line 50 in FIG. 9 is identical to that of FIG. 3.
- Level 1 represents a periodic task timeline for tasks of priority 1.
- Level 2 represents a periodic task timeline for tasks of priority 1 and 2.
- Level 3 represents a periodic task timeline for tasks of priority 1-3.
- each timeline shown in FIG. 9 comprises alternating level i busy (B) and idle (I) intervals. Every level i busy period B(i) is contained in a level (i+1) busy period B(i+1). Every level (i+1) idle period I(i+1) is contained in a level i idle period I(i).
- the timeline repeats every hyperperiod, so idle and slack accumulators can be reset.
- slack can be reclaimed from tasks that complete in less than their worst-case execution times. Reclaimed slack can be stolen from tasks having "guaranteed" hard deadlines and made available to high priority non-essential tasks. Reclaimed slack can only be stolen after the execution of hard deadline tasks.
- FIG. 10 is a task execution timeline illustrating a worst-case timeline.
- the hyperperiod is 30.
- FIG. 10 illustrates the case where both tasks consume their worst-case execution times. Only 4 time units are not used.
- FIG. 11 is a task execution timeline illustrating slack reclamation from unused compute times. Using the identical periodic task set as described above for FIG. 10, FIG. 11 illustrates the case where task τ_1 uses less than its worst-case execution time, so that the first dispatch of τ_2 completes execution at time 8 rather than time 15 as in FIG. 10.
- This section defines slack scheduling calculations made at run-time. There are two basic operations: update slack variables and calculate available slack.
- Algorithm (1) updates the idle slack variables used when computing slack availability. Update_time equals the worst-case time to execute this routine, a constant (possibly based on i). Algorithm (1) is called when a periodic task (which has not overrun its execution time allotment) completes. In theory, tasks can overrun their execution time allotment, and the slack algorithms still work correctly, perhaps producing negative slack values.
- Algorithm (2) updates the aperiodic slack variables used when computing slack availability. It is called whenever an aperiodic task completes, which might include surplus compute time for a periodic task increment, or when the idle task completes, or when a newly arriving request for slack occurs. Update_time equals the worst-case time to execute this routine, a constant (possibly dependent on i).
- the aperiodic task τ may execute over several time increments; i.e., it may be scheduled, consume all slack, suspend itself, be rescheduled when more slack is available, etc.
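Since Algorithms (1) and (2) themselves appear in the appendix, the following is only a heavily simplified sketch of the accumulator updates they describe; the class, field, and parameter names are illustrative, not the DEOS implementation's.

```python
# Per-level slack accumulators, updated at task completion.
class SlackAccumulators:
    def __init__(self, levels):
        self.reclaimed = [0] * levels           # slack reclaimed per level
        self.aperiodic_consumed = [0] * levels  # slack consumed per level

    def periodic_complete(self, level, allotted, used, update_time):
        # Algorithm (1) idea: the unused part of the allotment becomes
        # reclaimed slack; the (constant) cost of this routine is deducted.
        # An overrunning task simply produces a negative contribution.
        self.reclaimed[level] += allotted - used - update_time

    def aperiodic_complete(self, level, used, update_time):
        # Algorithm (2) idea: charge the slack an aperiodic task (or task
        # increment, or the idle task) consumed, plus bookkeeping overhead.
        self.aperiodic_consumed[level] += used + update_time
```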
- an aperiodic task is one of the following three types of tasks: an interrupt service routine (ISR) thread, the idle process, or any periodic task that did not complete in its original time allotment and that requested slack to complete.
- the remainder of the incomplete periodic task is called a task increment.
- Pseudo-code for calculating slack available is provided by Algorithm (3)
- Threads that access binary semaphores (mutexes) in DEOS use a protocol known as the Highest Locker Protocol (HLP), which assigns to any thread locking a monitor the highest priority of any thread that can lock that monitor. The assignment is made at the time the lock is granted and held until the thread releases the lock.
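The Highest Locker Protocol described above can be sketched as follows. This is illustrative only: here a smaller number means a higher priority, and the ceiling is precomputed as the highest priority of any thread that can lock the mutex.

```python
class HLPMutex:
    def __init__(self, ceiling_priority):
        self.ceiling = ceiling_priority  # highest priority of any locker

    def lock(self, thread):
        # Raise the locker to the ceiling for the duration of the lock.
        thread.saved_priority = thread.priority
        thread.priority = min(thread.priority, self.ceiling)

    def unlock(self, thread):
        # Restore the locker's original priority on release.
        thread.priority = thread.saved_priority
```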
- slack is granted only on a "best-effort" basis. No guarantees are made about the slack already granted to a thread. It could be taken away (by a higher priority slack-consuming thread) at any time the thread is not executing a critical section or is not at the head of a resource queue.
- Each thread may be viewed as having a life cycle which is represented as a sequence of one or more of the following states: NotCreated, Dormant, CompleteForPeriod, Ready, WaitingForPredecessor, Running, ActivelyWaitingForResource, or PassivelyWaitingForResource.
- a resource is an event, a semaphore, or a mutex.
- For ease of exposition, we define a thread to be Active if it is in state Ready, Running, WaitingForPredecessor, or ActivelyWaitingForResource.
- the active status for the waiting states will depend on the final definition of the waiting states. In particular, a thread that is CompleteForPeriod or is PassivelyWaitingForResource is not active, and its unused compute time can be reclaimed as slack.
- Each thread has a Boolean value Slack indicating whether the thread, when enabled, will make requests for slack or not.
- For slack consumers, a distinction between ExecutingOnSlack and not executing on slack is made for more efficient budget accounting of overheads.
- There are two types of threads in DEOS: periodic and Interrupt Service Routine (ISR). Periodic threads can be either static or dynamic, the former being named and allowed to participate in schedule_before relationships. ISR threads can be thought of as being "dispatched" by an interrupt. ISR threads are also sometimes called aperiodic threads and are always dynamic. Both ISR and periodic threads may request slack in addition to their fixed budget, provided their budget has been specified as slack-consuming in addition to a fixed budget (i.e., Slack is true) and slack functions are specified as "on".
- the slack scheduler may support on-line acceptance tests for slack allocation to promote fairness, especially to lower priority slack requesters.
- Both periodic and ISR threads have rates associated with them.
- the inverse of a thread's rate is its period.
- the set of rates at which a thread can be selected are fixed at cold start.
- Static periodic threads have rates that cannot change between activations.
- Dynamic threads can be assigned any ofthe available rates upon creation.
- ISR threads are always assigned the highest rate, and the priorities of the ISR threads are unique and higher than all other user-defined threads in the process.
- Both periodic and ISR threads can access mutexes. Mutexes also have a budget, and each access to the mutex is timed. Periodic threads execute only once per period and consequently, the number of mutex accesses per period is known for periodic threads.
- the compute time of a periodic process includes the execution times of the mutexes. Note that this inclusion of the mutex execution time is in addition to the blocking terms that must also be included in the feasibility analysis.
- ISR threads also have a budget but are not restricted to execute once per period. In one embodiment, all ISR threads are scheduled at the highest rate (i.e. the shortest period). Since there is no hard limit (in practice, the hardware always imposes a hard limit, since the time to respond to an interrupt is not zero) on the number of times an ISR thread can execute per period, an accounting mechanism has been devised that effectively transfers some budget from the ISR thread doing the preemption to the thread being preempted. With minor modification, this budget transfer mechanism can be applied to slack requesters, which like ISR threads, are randomly arriving, and the number of requests per period is not known in advance. DEOS time-charging policies and their relationship to slack requests are described in Section 3.4.
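The budget-transfer idea can be sketched in a few lines; this is a simplification of the mechanism described above, and the names are illustrative.

```python
# When an ISR thread preempts another thread, the preemption cost is moved
# from the ISR thread's budget to the preempted thread's budget, so that
# randomly arriving preemptions do not drain the victim's allotment.
def charge_preemption(isr_budget, preempted_budget, preemption_cost):
    return isr_budget - preemption_cost, preempted_budget + preemption_cost
```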
- Threads can wait for resources, where a resource is either a semaphore or an event.
- a resource is either a semaphore or an event.
- a mutex is considered to be a special case of a binary semaphore with respect to waiting policies. Consequently, mutex waiting policies are not treated as a special case.
- the lock operation is to be interpreted as a wait operation, and the unlock operation is to be interpreted as a signal operation where the maximum count for a mutex is one.
- the thread is really waiting for the semaphore to count to less than its maximum count.
- Events that lead to scheduling threads at random times in DEOS are interrupts (triggering an ISR thread) and threads waiting for an event to occur.
- Three types of waits are supported in DEOS: IndefiniteTimeout, LongDurationTimeout, and ShortDurationTimeout.
- Algorithms 4-6 are meant only to be suggestive and do not necessarily reflect the notation used in the DEOS implementation. All types of timeouts are applicable to mutexes, semaphores, and events. In the following algorithms, let c be the calling thread, and e (or r) be the event (resource) for which the calling thread is waiting.
- queue_time(c) and resource_time(r) require definition.
- queue_time depends on thread type (ISR, periodic) and/or the thread state attribute ExecutingOnSlack. If the thread is an ISR thread or a periodic thread that is executing on slack, then queue_time(c) is set to 2*(cswap+delta) plus cachebonus. If the thread is a periodic thread that is not executing on slack, then queue_time(c) is cswap plus delta.
- resource_time(r) depends on the resource r and also on the thread c receiving access to r. If r is a mutex or semaphore, then resource_time(r) is the execution time of r plus overhead incurred for context swap. If r is an event, then r is simply the overhead for context swap. (The time to pulse an event is charged to the pulsing thread, not the waiting thread.)
- the context swap overhead is defined by queue_time(c).
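The queue_time rules just stated transcribe directly into a sketch (cswap, delta, and cachebonus are the overhead constants, whose values the text does not specify):

```python
# queue_time(c) per the rules above: ISR threads, and periodic threads
# executing on slack, are charged two context swaps plus a cache bonus;
# a periodic thread not running on slack is charged one context swap.
def queue_time(is_isr, executing_on_slack, cswap, delta, cachebonus):
    if is_isr or executing_on_slack:
        return 2 * (cswap + delta) + cachebonus
    return cswap + delta
```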
- Pseudo-code for handling a long duration timeout is provided by Algorithm 5 (Appendix B). The only difference between Algorithm 5 and Algorithm 4 is keeping track of the number of iterations. Overhead accounting is identical.
- Pseudo-code for handling a short duration timeout is provided by Algorithm 6 (Appendix B). Note that when a thread gives up its budget to slack, no guarantees are made that the remainder of its budget can be reclaimed prior to its next period. Both ISR and periodic threads can give up portions of their budgets to slack. Unless a thread will complete for period, it must always maintain enough budget to allow it to run on slack, if resumed. Since no guarantees are made about semaphores (excluding monitors), no slack predecrements are made for the execution of code associated with semaphores. Since semaphores are not critical sections (and do not change priorities), preemptions are permitted.
- ISR threads may be supported at rates other than the highest rate. Slack stealing could then provide a mechanism for improved response time of ISR threads.
- DEOS applications comprise a collection of processes.
- a process is a collection of threads. Each process is allocated a portion of the total CPU bandwidth, so processes are used to create a form of time partitioning. Each process can be considered a partition. Each thread belongs to a single partition. Each partition is "guaranteed" a certain amount of time or budget.
- a process' budget equals its process utilization bound times the hyperperiod, where the hyperperiod is the least common multiple of all periods in the system. No task is allowed to exceed its execution time budget.
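The hyperperiod and budget computation just described can be sketched directly; the function names are hypothetical and the period units arbitrary.

```python
from math import gcd
from functools import reduce

def hyperperiod(periods):
    """Least common multiple of all periods in the system."""
    return reduce(lambda a, b: a * b // gcd(a, b), periods)

def process_budget(utilization_bound, periods):
    """A process' budget: its utilization bound times the hyperperiod."""
    return utilization_bound * hyperperiod(periods)
```

For example, with periods of 10, 20, and 25 time units the hyperperiod is 100, so a process with a utilization bound of 0.25 receives a budget of 25 time units per hyperperiod.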
- Every process has a Primary_Thread that is either active or inactive. When inactive, the Primary_Thread's budget is part of timeline slack.
- Primary_Thread performs system level functions and is not defined by the application developer.
- the primary thread's budget is defined implicitly by the process' budget and all active threads in that process.
- the primary thread's budget ranges anywhere from zero to the process' maximum budget, depending on what other threads are active in the process.
- the primary thread's budget is equal to the process' budget.
- at process initiation, only the process' primary thread is active; moreover, it may consume a lot of execution time for initializations, etc., thereby requiring a large allotment of CPU time, at least initially.
- new threads could be started, reducing the primary thread's budget allotment.
- the primary thread's budget is an explicit quantity that is decreased when threads within the process are started/created and increased when threads are stopped/deleted.
- schedulability analysis applies to each process, and the entire system is said to be schedulable only if every process is feasible.
- system feasibility does not imply that each process' schedule is feasible. It is permissible to share mutexes across partitions, and in this case the mutex budget is taken from the system budget rather than from the budgets of each of the processes with threads that access the mutex. This is an optimization that takes blocking times out of the processes' feasibility tests and moves them to a system level test.
- Threads and partitions are dynamic. Activations and deactivations occur at the next period boundaries of the threads. On-line acceptance tests are used for admission of threads and partitions.
- Thread Services (Appendix A) identifies the DEOS Thread Services that relate to slack scheduling. These thread services will now be discussed.
- This operation applies only to static threads, although the slack updates that apply to creation of dynamic threads are provided here. If the addition of this thread still results in a feasible schedule, the dormant thread is put in state CompleteForPeriod for the remainder of the current period and dispatched (i.e. made ready) at the start of the next period. From a scheduling point of view, this service is a no-op if the thread is not in the dormant state at the time of call or if the resulting schedule with this thread included would be infeasible.
- Section 3.3 describes an on-line acceptance test that can be used to determine whether admission of a thread would result in an infeasible process schedule. If addition ofthe thread results in a feasible schedule, the thread will be started; otherwise an error condition will be returned.
- if the process' PrimaryThread is active, then the budget for starting/creating a new thread comes from the primary thread's budget. Two slack updates are needed in this case: first, the reserved time allocated for the primary thread will be reduced, resulting in increased timeline slack at the primary thread's slack level; second, a reduction in timeline slack must be made for the new thread being allocated, in that order. If the process' PrimaryThread has been stopped, then prior to starting a thread, a reduction in timeline slack must be made. Thus, less work is involved in updating slack variables when the PrimaryThread is not active. The slack variable updates needed to reflect the creation/start of a thread are provided below.
- slack variable updates are required for activation of a thread when the process' PrimaryThread has been stopped. Additional updates are required to decrease the primary thread's utilization if it has not been stopped. More detailed algorithms are given in Section 4, entitled Slack Scheduling in DEOS With Dynamic Threads. All of these slack variable updates are charged to the thread making the call as part of the execution time of create/start thread.
- an active thread's state is changed to CompleteForPeriod and it is suspended to the end of the period, at which point it will be rescheduled.
- Slack can be reclaimed during this routine. Specifically, when the thread's state is changed to CompleteForPeriod, this is viewed as a thread completion and so the same updates take place that would occur at thread completion. More specifically, the amount of reclaimed slack is worst_case_exec_time(thread) - actual_exec_time(thread), where thread is the thread being restarted.
- the slack accumulators are (ultimately, depending on whether updates are deferred to the completion of aggregate threads) updated as shown in Algorithm 13 (Appendix B). If updates are executed only upon the completion of an aggregate thread or when scheduling an aperiodic, the parenthesized "ultimately" clause is invoked, and the reclaimed slack will be stored locally until an aggregate update is made. A more detailed description for this can be found in Section 4 entitled Slack Scheduling in DEOS With Dynamic Threads. If the thread was executing a mutex, the mutex is unlocked by the restart routine. The execution time counters need to be adjusted to reflect this. The mutex status indicates that it was unlocked as the result of a thread restart. Updating execution times for threads that have mutexes locked appears to have no unusual effect on slack variable updates.
- in DEOS, only static threads are stopped. Stopped static threads can then be killed. Dynamic threads are killed, which has the effect of stopping and then killing them in one operation. The thread's state becomes NotCreated. Only an active or dormant thread can be killed.
- Slack variable updates may be made when either a dynamic thread is killed or a static thread is stopped. No updates are made when a stopped static thread is killed. If the process' PrimaryThread is active (i.e. has not been stopped), then the budget of the thread being killed/stopped goes back to the primary thread's budget.
- TimelineSlack_{i,j+1}: when the budget goes to slack (i.e. when the primary thread has been stopped), for a thread with rate i whose stop/kill service executed in period j, the value of TimelineSlack_{i,j+1} is incremented by the thread's allocated cpu time, which becomes available for slack at the start of the next period at rate i. Just as in start/create thread, these slack updates will require new slack availability calculations for threads currently executing solely on slack.
- the budget of the thread being killed/stopped is returned to the primary thread's budget, which also requires updating timeline slack variables.
- the timeline slack variable updates for returning a level k stopped/killed thread's budget to the slack pool at the start of the (j+1)st period of T_k are provided below.
- TimelineSlack_{k,j+1} is initialized to TimelineSlack_{k,j} + (T_k - C_{k,j}) at time j · T_k.
- killThread or stopThread
- the budget of the thread being killed is cpu_time, which is available once every T_k units of time. This deallocation policy applies to all hard-deadline (periodic and ISR) threads.
- the remainder of a thread's unused budget can also be reclaimed for use in this period, if desired. This applies to both hard-deadline and best-effort threads.
- the reclaimable slack is worst_case_exec_time(thread) - actual_exec_time(thread).
- the reclaimed slack is remaining slack(thread), which is equal to the slack granted at time of request minus the slack used (or equivalently the execution time) since time of slack request.
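The two reclamation formulas above can be captured in one hedged sketch; the field names on the thread record are assumptions for illustration.

```python
def reclaimable_slack(thread):
    """Slack reclaimable from a thread, per the two cases described above.

    For a hard-deadline or best-effort thread completing for period, this is
    the unused portion of its worst-case budget. For a thread running on
    granted slack, it is the unused portion of the grant.
    """
    if thread.get("running_on_slack"):
        # slack granted at time of request minus slack used since the request
        return thread["slack_granted"] - thread["slack_used"]
    return thread["worst_case_exec_time"] - thread["actual_exec_time"]
```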
- the killed thread's budget becomes available for use by others at the start of the next period defined by T_k. This is analogous to the start thread operation. If the thread being killed/stopped is holding a mutex lock, it is unlocked by this service call. The mutex status now indicates that it was unlocked as the result of a kill/stop thread call.
- if the thread being killed/stopped is waiting for an event, it is removed from the linked list for that event. If the thread has no successors and is actively waiting on an event, the unused portion of the thread's budget for this period can be reclaimed.
- the calling thread is suspended until the start of its next period.
- This procedure can be called either by a thread suspending itself or by the kernel when a budget is about to expire. If a thread suspends itself, the call is charged to the primary thread, which may have zero budget (if it was stopped). In the case of a budget expiration, the kernel does the detection and initiates the call, and hence receives the charge, since it initiated the suspension.
- Unused compute time in this period can be reclaimed as slack.
- the reclaimed slack is worst_case_exec_time(thread) - actual_exec_time(thread), where the worst-case execution time of a thread is (in most cases) the thread's budget.
- ISR threads may be different, in the sense that they can execute many times per period. If slack is to be reclaimed for ISR threads, then a formal specification of a worst-case execution time is needed (if different than thread budget), and the slack reclaimed is then obtained using the above calculation. Depending on when the slack accumulators are updated, this information may be maintained locally until the aggregate thread updates, or a call to Algorithm 13 (Appendix B) may be made immediately.
3.2.7 Restart Process
- the process' primary thread needs a constraint vector so that the process utilization budget and the process' primary thread's cpu_time are distinguished. The latter is the worst-case amount of execution that the process' primary thread takes, which can vary over time. Initially, the cpu_time might equal the utilization budget when the primary_thread is performing process initializations. Later, it may be reduced to zero or something in between.
3.2.8 Create Mutex
- Mutex blocking times are computed as a system level parameter, not on a per process basis.
- the protocol used for mutex access is known as the highest locker protocol, which is a modified version of the well-known priority ceiling protocol.
- the highest locker protocol works as follows. If a thread t wishes to access a mutex M, then when access to M is granted, t (temporarily) inherits the highest (numerically lowest) priority of any thread that has access privileges to M.
- max_lock_time is input and defines the maximum time the mutex can be locked per execution by any of the threads that can access it.
- priority_ceiling_period identifies the period of the thread with the highest (numerically lowest) priority that can access the mutex. If a thread accesses a set of mutexes, say M_1, M_2, ..., M_m, where M_j is accessed v_j times, then the thread's worst-case execution time has added to it the sum over j of v_j · max_lock_time(M_j).
- the component of utilization that may be introduced as a blocking time at level priority_ceiling(mutex) is max_lock_time(M) / priority_ceiling_period(M).
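The two quantities just described, the blocking-time utilization contributed by a mutex and the worst-case execution time inflation for a thread that accesses mutexes, might be computed as follows. The dictionary layout for a mutex is an assumption for illustration.

```python
def mutex_blocking_utilization(mutexes):
    """Sum of max_lock_time(M) / priority_ceiling_period(M) over a set of
    mutexes: the utilization each may introduce as blocking time at its
    priority ceiling, per the description above."""
    return sum(m["max_lock_time"] / m["priority_ceiling_period"]
               for m in mutexes)

def thread_wcet_with_mutexes(base_wcet, accesses):
    """Add v_j * max_lock_time(M_j) for each mutex M_j accessed v_j times.

    `accesses` is a list of (mutex, access_count) pairs (an assumed shape).
    """
    return base_wcet + sum(v * m["max_lock_time"] for m, v in accesses)
```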
- the feasibility analysis, which includes blocking times at the system level, is provided in Section 3.3.
- the calling thread is given the opportunity to wait until the mutex is unlocked (i.e. the caller can set the wait_okay parameter to true). If the calling thread waits, the time queued while waiting for a lower priority process to execute the mutex counts as blocking time for waiting threads. (Technically, from an implementation point of view, it is a preemption, since the lower priority thread executing the mutex had inherited the mutex's priority ceiling, which is at least as great as the waiting thread's priority.) Waiting for higher priority threads to execute is a preemption. Neither preemption times nor blocking times count towards the thread's accumulated execution time; however, both are considered in the feasibility analysis.
- All threads can lock a mutex. For those threads with guaranteed hard deadlines, no checks on slack availability are necessary prior to locking a mutex. For "best-effort" threads, sufficient slack must have been allocated to complete the execution of the mutex prior to granting access. This is most easily tested by looking at the residual time slice for execution until the requesting thread will be suspended. If it is smaller than the worst-case execution time for the mutex, the lock will not be granted, and the thread (assuming it cannot continue on an alternative path not requiring access to the mutex) would be wise to suspend itself. However, this policy works only if another service called by a higher priority thread that requires the use of slack is not allowed to begin before the mutex execution is complete.
- the thread's residual time slice is first decremented by the worst-case mutex execution time. Then the mutex is executed, and the thread's execution time is decreased when the mutex execution was less than its worst-case execution time. No slack variable updates are needed here.
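A minimal sketch of this pre-decrement/refund discipline for a best-effort thread executing a mutex on slack, assuming a hypothetical run_mutex() callback that returns the actual execution time:

```python
def execute_mutex_on_slack(residual_slice, wcet_mutex, run_mutex):
    """Grant and execute a mutex against a residual time slice.

    The worst case is charged up front; any unused portion is credited
    back after the mutex runs. Returns (actual_time, new_residual_slice),
    or (None, residual_slice) when the lock is not granted.
    """
    if residual_slice < wcet_mutex:
        return None, residual_slice          # insufficient slack: no lock
    residual_slice -= wcet_mutex             # pre-decrement the worst case
    actual = run_mutex()                     # execute the critical section
    residual_slice += wcet_mutex - actual    # refund the unused portion
    return actual, residual_slice
```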
- the status of the mutex is set to indicate that it was reset. If the mutex was locked, the lock is released and the locker's execution time is adjusted to reflect the time spent executing the mutex. If the locker reset the mutex, the execution time of this service is added to the locker's execution time. No slack variables are updated during this operation.
- either the indefinite or the long duration timeout waits will be passive, in the sense that the thread suspends itself after an initial query of an unpulsed event, and then continues to poll the event periodically, either indefinitely (for the indefinite timeout) or at most a prespecified number of times (for the long duration timeout). All polling, beyond the first, takes place at the beginning of the period. No time is billed against the thread's budget when suspended, and in fact the unused compute time can be reclaimed as slack for that period. If the thread has no successors, then the wait is active for either the indefinite or the timed timeout waits. Time spent actively waiting is charged against the thread's CPU budget. The active thread is added to the event's linked list.
- if the event is pulsed prior to the exhaustion of the budget remaining for the thread's period, the thread is placed in the ready queue. If the event has not been pulsed prior to the exhaustion of the thread's budget, the thread is complete for period and is suspended until the start of the next period. In this case, local slack updates (if the thread is a part of an aggregate) are necessary. The thread has consumed its worst-case execution time, so no slack can be reclaimed. Active waiting resumes at the start of either an indefinite (for an indefinite timeout) or at most a specified (for a timed timeout) number of periods. If, while actively waiting, the event is pulsed, the thread is removed from the event's linked list (as are all waiting threads) and placed in the ready queue. No slack updates are required at this time, although the thread's execution time must reflect the time spent actively waiting.
- Thread execution time and slack variable updates occurred at the time of suspension and are not needed at the time of dispatch.
- An immediate timeout has no waiting other than that accounted for by preemption.
- a thread with an immediate timeout suspends itself and changes state to CompleteForPeriod if the event is not pulsed when it reaches the head of the ready queue. Slack can be reclaimed at this time.
- DEOS defines quantities in terms of one of two levels: process and system.
- a process is a set of threads and defines a time partition for all threads in the process. Each process has a utilization bound and, ignoring blocking times, all active threads within a process must be feasible.
- the worst-case compute time of a thread includes the time it takes to do two context swaps (one at the start and one at the finish) and the time it takes to execute all mutexes accessed (one or more times) by the thread.
- Blocking times are maintained at the system level. By pulling blocking times up to the system level, some overall optimizations can be made. This timing optimization may lead to a potential reduction in fault tolerance for hardware failures. In particular, when encountering a recoverable timing or memory fault while executing a monitor shared by multiple processes, it is difficult to argue that the error is contained within a single process. It may be the case that all faults are fully non-recoverable.
- Each process has a set of Constraint vectors, and each vector set contains one vector for every rate.
- C p ⁇ (T P;1 , C p ,,)
- i l,...,n ⁇ , where n is the number of rates supported by DEOS.
- T p, , T, is the period ofthe i th rate supported by DEOS, and C p ,, refers to the aggregate thread budget in process p.
- process p has as threads that run at rate i, t p ,,, ⁇ , t p , ⁇ j2 , ..., t p , 1>np , ⁇ with worst-case compute times c p ,,, ⁇ , c p ,,,2, ..., c p ,,, np , ⁇ respectively, then
- the primary thread is not explicitly declared. Unless the primary thread is stopped, its compute time, or equivalently budget, is determined by all other active threads in the process.
- Each process p also contains a utilization bound U p .
- Non-ISR threads are periodic threads that are regularly dispatched and that execute exactly once per period.
- let c_{p,i,j} be the budget of thread t_{p,i,j}, where the budget should allow for two cswap times and the execution times of all mutexes that t_{p,i,j} might access during any execution. This will produce pessimistic results, but it does allow for worst-case context swap overheads.
- process p is said to be feasible if C_{p,1}/T_1 + C_{p,2}/T_2 + ... + C_{p,n}/T_n ≤ U_p.
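The per-process feasibility test can be sketched as a direct utilization check; the argument names are illustrative.

```python
def process_feasible(budgets, periods, utilization_bound):
    """Feasibility test for a process: the sum of aggregate budget over
    period, taken across all rates, must not exceed the process'
    utilization bound U_p."""
    utilization = sum(c / t for c, t in zip(budgets, periods))
    return utilization <= utilization_bound
```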
- let τ_i be the aggregate thread at level i over all processes.
- τ_{p,i,j} is a part of τ_i's execution for each process p and each index j at rate i.
- S_i = { M_j | τ_i accesses M_j as a part of its execution }.
- let S̄_i be the union of all semaphores that can block the execution of τ_i.
- S̄_i = (S_{i+1} ∪ ... ∪ S_n) ∩ S_i.
- S̄_n = ∅ (the empty set).
- let B_i be the worst-case blocking time for τ_i.
- B_i is the worst-case execution time of all semaphores that can be executed by any of τ_{i+1}, ..., τ_n and that are also executed by τ_i.
- B_i = max { C_k | M_k ∈ S̄_i }.
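Under the blocking-time definitions above, B_i could be computed as follows. The data layout (a list of per-level semaphore name sets plus a cost map) is an assumption for illustration.

```python
def worst_case_blocking(sem_sets, sem_cost, level):
    """Worst-case blocking time B_i for the aggregate thread at `level`.

    sem_sets[i] is S_i, the set of semaphore names used at level i
    (0-indexed, highest rate first); sem_cost maps a semaphore name to its
    worst-case execution time. B_i is the costliest semaphore used both by
    some lower-priority level and by level i itself.
    """
    # union of semaphores used by all lower-priority levels (j > i)
    lower = set().union(*sem_sets[level + 1:])
    blocking_set = lower & sem_sets[level]   # this is S-bar_i
    return max((sem_cost[s] for s in blocking_set), default=0)
```

Note that for the lowest-priority level the lower-level union is empty, so B_n = 0, matching S̄_n = ∅ above.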
- ISR threads in DEOS pose additional challenges since they are not treated strictly as periodics. Every ISR thread does have a period and a (per period) budget. However, an ISR thread may occur as often as it likes in any period, and will receive service as long as it has not exceeded its budget. In theory, there is no restriction on the interarrival times (although the minimum time to latch an interrupt imposes the equivalent of a minimum interarrival time) for ISR threads. The only requirement is that not more than the CPU budget will be given to the sum of all arrivals in a period.
- if an ISR thread has a budget of 10 ms every 40 ms period and each execution takes at most 300 µsec, then about 33 ≈ 10 / 0.3 executions of the ISR thread could occur in a 40 ms period.
- the ISR thread need not execute at the priority implicitly defined by its period, but it can execute at up to the highest priority.
- the execution of the ISR thread is modeled as a blocking time for higher priority levels and as preemption at lower priority levels. If the ISR thread will execute at priority P_e and is scheduled (via rate) at priority P_r, then the ISR thread's execution time will be included as a blocking time for all threads with priority P such that P_e ≤ P < P_r. In addition, it will be scheduled using the usual feasibility analysis so that preemption times for lower priority threads will be accurately captured.
- let α_{p,j} be an ISR thread with budget C_j and period T_j, and suppose that α_{p,j} will be executed at priority i where 1 ≤ i ≤ j.
- C_j is the budget that includes all context swap times and all critical section compute times in addition to the usual compute time over all executions in a period. For example, if the per-dispatch execution time of α_{p,j} is cpu_time, which includes mutex execution time but not context swap time, and α_{p,j} is executed a maximum of k times in any time interval of duration T_j, then k·(2·cswap_time + cpu_time) ≤ C_j if k executions are to be supported within a period of duration T_j.
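The bound k·(2·cswap_time + cpu_time) ≤ C_j can be inverted to find the maximum number of ISR dispatches one period's budget supports; this sketch reproduces the 10 ms / 300 µsec example above when the context swap time is neglected.

```python
import math

def max_isr_executions(budget, per_dispatch_time, cswap_time=0.0):
    """Largest k with k * (2*cswap_time + per_dispatch_time) <= budget."""
    return math.floor(budget / (2 * cswap_time + per_dispatch_time))
```

With a 10 ms budget and 0.3 ms per dispatch, 33 executions fit within the 40 ms period.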
- the above can be extended to multiple ISR threads at the same priority and for ISR threads specified at the same priority as periodic threads.
- Slack stealing can be used to provide ISR threads with the priority of service they want.
- the intuitive reason is that an ISR thread that runs at a priority higher than indicated by its rate has been included as a blocking time at the higher rates.
- the blocking time analysis implies that sufficient slack will be available at those rates, and these should cause no additional overhead.
- Aperiodic threads may choose to try to consume slack rather than wait for their priority to become the highest (assuming they are not assigned the highest rate). The reason for using slack is to attempt to achieve a more rapid response time. If sufficient slack is available such that the entire ISR thread can run to completion without having to suspend itself, then it can execute. Upon the completion of its execution, the thread may want to give its worst-case execution time (which is 300 µsecs in our example) back to slack so that other threads could consume it. Alternatively, the thread might choose to keep the entire remainder of its budget and potentially sneak in another execution. Slack can be used for either purpose.
- DEOS assigns as much ofthe overhead time to tasks as is possible.
- One reason for this objective is to reduce variability in charges accrued by DEOS. For example, thread creation and destruction is a dynamic process, hence variable in the number of occurrences, and thus they are charged to the activating and deactivating threads rather than DEOS. As a rule, the thread incurring the overhead is charged for the time.
- a primary and continual source of overhead is context swap times.
- Each task execution (not each task budget) is charged for two context swap times, one that occurs when the task is started and one when the task completes. If a task is preempted, it is the preempting task that incurs the context swap charges on either end.
- ISR threads can execute many times per period, and each execution is charged. Presumably the application developer is provided with the time to execute a cswap when estimating compute times, or equivalently when specifying budgets.
- Locking, executing, and unlocking mutexes also introduces overheads, some of which are billed to the execution time of the mutex and some of which are billed to the requesting/releasing threads.
- the lock and unlock execution times are billed to the thread attempting to either lock or unlock the mutex.
- the mutex execution is charged two cswap times, one at initiation and one at completion. These times are included as a part of a mutex's worst-case execution time (note that budget and execution time are not the same for ISR threads).
- the process has no budget of its own any longer. If the PrimaryThread is active, its budget is defined as all available time not allocated to other threads in the process. If the PrimaryThread is stopped, the remaining process budget is assigned to slack, and the PrimaryThread's budget is zero. Or, the PrimaryThread's budget can be something in between, effectively lowering the process' utilization bound U_p for an unspecified period of time.
- DEOS monitors execution times and enforces timeouts. The monitoring can be done by simply setting a timer to the worst-case execution time; if the timer expires, then a timing fault has occurred, and DEOS (not the violating thread or monitor) executes the fault handler.
- Every mutex has a monitor process.
- the monitor process has some time allocation associated with it. When the mutex is locked, the locking time is subtracted from the monitor process time.
- system level slack scheduling is supported, so we begin with the advantages of system level slack availability. Then we list some potential disadvantages. It is not difficult to transform the advantages of system level slack to the disadvantages of process level slack, and similarly, the disadvantages of system level slack into the advantages of process level slack.
- Slack sharing can occur among processes and is not confined to a single process. Both reclaimed and timeline slack can be shared among processes, allowing a thread in high need of bandwidth to make use of all available slack in the DEOS system.
- the "clock" could resume at the start ofthe most recently recorded highest rate period. For example, if the highest rate period is Ti, then if somewhere between k Ti and (k+l)T ⁇ a power failure is detected, the time of restart would be "declared” to be k Ti .
- Process Restarts can occur at any time. Technically this was covered under thread start and stop, although it is worth repeating here since process restarts are often thought of as a response to abnormal situations. If a process restart is executed, and the PrimaryThread is active, then all other threads in the process are killed/stopped, and their budgets go back to the PrimaryThread. If the PrimaryThread has been stopped, it is started. In both cases, timeline slack variables are decreased for killed/stopped threads, and they are increased for the PrimaryThread. (3) Process Creation and Destruction can occur at any time.
- the present invention relates to a slack scheduling implementation in a realtime operating system (RTOS) employing dynamic threads of execution that has application, for example, in controlling an aeronautical system such as an aircraft or spacecraft.
- RTOS realtime operating system
- the invention forms part of a real-time operating system used in an aeronautical control system developed by Honeywell, Inc., Minneapolis, Minnesota, which is known as Digital Engine Operating System (DEOS).
- DEOS Digital Engine Operating System
- the invention is not limited to implementation in DEOS or in a RTOS, and it can be utilized in any type of data processing system, although it has particular utility in a real-time control system, such as an industrial, residential, commercial, military, or other type of control system.
- Section 4 is organized as follows:
- Section 4.1 describes the impacts on slack calculations resulting from the differences in assumptions between "active task sets" in DEOS and in a classical RMS system. Due to the dynamism, assumptions about “future” idle time availability essentially disappear, and slack availability updates become much more frequent at all slack levels (specifically, at the highest rate of any primary thread in the system).
- Section 4.2 describes "timeline slack” in the context of dynamic threads and process activations and deactivations, and how to compute it.
- the algorithms that maintain, track, and update the thread and process budgets used for time partitioning in real-time operating systems with dynamic threads and process (de)activations define critical quantities used when calculating accurate values for available (dynamic) timeline slack.
- Section 4.3 provides algorithms for keeping track of reclaimed slack availability and all slack consumption using slack accumulators. As discussed earlier, reclaimed slack is any time "given up" from a worst-case budget by completing for period early. Section 4.4 describes two different approaches for slack allocation. The differences lie in deferring slack availability calculations for slow rate slack consumers when higher rate slack consumers subsequently become present.
- For convenience, a large table of notation is found in Table 8. Much of the notation in Table 8 will have been previously defined in context.
- Dynamic "period timeline” slack resulting from dynamic thread activations and deactivations i.e. the "timeline slack” is dynamically varying, unlike in known systems supporting slack stealing, and DEOS' only sources of "timeline” slack are the budgets of inactive primary threads, unallocated system level slack, and possibly reclaimed slack from application threads that CompleteForPeriod almost immediately);
- Slack variable initialization and hyperperiod reset routines initialize and periodically reset the slack variables. These routines are described in Sections 4.2.2 and 4.2.3. When allowing dynamic allocation of threads, the period resets are more complex than in the case where only static threads are present.
- Slack variable accumulator update routines are executed in response to one of several events listed in Section 4.3. Slack variables in this category keep track of reclaimed slack, idle time consumed, and slack time consumed. 4. Slack availability calculations must be made whenever a thread
- It is not uncommon for processors to have a worst-case utilization not far from 100% (and sometimes more), so the assumption of only reclaimed slack is applicable in practice. Currently, DEOS' estimated worst-case utilization is about 88%. It also admits a slack model that supports randomly occurring activation and deactivation requests, effective next period, for threads and processes. It does not effectively support design-to-time tasks, and it will be less effective (but not necessarily ineffective) at supporting incremental tasks, since available slack produced by the higher rate threads is not available to threads at the lower rate a priori (at least not a priori one short period).
- At least two sources of period timeline slack are (i) system time not allocated to processes and (ii) primary thread budgets when declared inactive (i.e. the sum of the inactive primary thread budgets). In one embodiment, these two sources occur at level 1, and are made available at times j·T_1, j ∈ {0, 1, 2, ...}. These definitions extend easily to processes with primary threads at slower rates than T_1. Many of those details are included in the algorithms presented herein.
- a possible third source might be ISR and periodic threads that CompleteForPeriod nearly immediately (e.g. when polling for a resource and not getting it). Since there is no way of knowing how soon a thread will complete for period, the implementation will likely be simpler if no special attempts are made to reclaim user thread slack at the start of each period. Just as hyperperiod timeline slack persists only through the hyperperiod in the original model, period timeline slack is guaranteed to persist at the level at which it was allocated only to the next period. However, under certain conditions we will see that it is possible to carry forward period timeline slack. Note that the notion of period timeline slack is largely a conceptual model and can be implemented using the slack reclamation mechanism.
- TimelineSlack_i(t) is used to denote the amount of period timeline slack that has been made available to (only) level i slack requestors since the start of the last period defined by T_i. It is important to note that the value of TimelineSlack_i is not cumulative since the start of the hyperperiod and applies only to slack produced at rate i, both of which are departures from the original definition.
- TimelineSlack_i = 0 for i ∈ {2, ..., n}, since all primary threads have rate 1.
- ΔU_sys is a single value since all primary threads have the same rate.
- provision may be required for different processes having their primary threads declared at different rates.
- Some embodiments may also require support for partitioning of slack among applications (perhaps for better fairness guarantees). This would require introducing a process index to many ofthe slack data structures.
- The algorithms that can lead to an update in the value of TimelineSlack_i are requests for user thread (de)activation (Algorithm 10), primary thread (de)activation (Algorithm 9), and process (de)activation (Algorithm 11). Discussion of each of these algorithms follows in subsequent sections.
- the practical storage needed to maintain period timeline slack table and aggregate compute time information is O(n ) when all tasks are harmonic. Specifically, O(n 2 ) storage is required to store the (n, [ ,) matrix and also the matrix ( ⁇ TimelineSlack, j ) that keeps track ofthe change to occur in timeline slack at level i when start of periods T j and T, next coincide, j > i.
- ΔTimelineSlack can be represented as an n-vector, since all processes' primary threads have period T_1.
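The ΔTimelineSlack bookkeeping above can be sketched in code. This is a simplified illustration, not the patent's Algorithm 8: the names `periods`, `timeline_slack`, and `delta_timeline_slack`, and the modulus test for coinciding period starts (valid for harmonic periods), are all assumptions.

```python
# Illustrative sketch (assumed names, harmonic periods): fold pending
# timeline-slack deltas into per-level timeline slack at the boundaries
# where the affected levels' periods restart.

def coinciding_levels(t, periods):
    # For harmonic periods, level i's period starts at time t iff t % T_i == 0.
    return [i for i, T in enumerate(periods) if t % T == 0]

def period_boundary_update(t, periods, timeline_slack, delta_timeline_slack):
    """At time t, apply the pending delta for every level whose period restarts,
    then clear that delta; levels whose periods do not restart are untouched."""
    for i in coinciding_levels(t, periods):
        timeline_slack[i] += delta_timeline_slack[i]
        delta_timeline_slack[i] = 0.0
    return timeline_slack
```

A deactivation at level h would thus be recorded once in `delta_timeline_slack[h]` and take effect only from the next start of the relevant period, matching the "persists only until the next period" semantics above.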
- FIG. 12 is a task execution timeline illustrating a conceptual model for dynamic thread activations.
- FIG. 12 represents a theorized time scheduling model in a system employing time partitioning. There are no slack requests from aperiodic tasks in FIG. 12.
- Partition P1 is depicted by low blocks in FIG. 12, and partition P2 is depicted by medium blocks.
- Each partition has two rates, R1 and R2.
- Rate R1 is represented by cross-hatching; rate R2 is represented by stippling.
- each thread is identified by a data structure of the form t(P,R,T), wherein "P" is a partition id, "R" is a rate id, and "T" is a thread id.
- the single down arrows in FIG. 12 indicate the times when thread 3 of partition P1 is executed.
- an R2 thread is activated (thread 3) in partition P2.
- Thread 3 of partition P2 is thus allocated 2 ms every 20 ms by the scheduler.
- the double down arrows in FIG. 12 indicate the times when thread 3 of partition P2 is executed.
- A = 6 between times 4 and 10
- A = 5 between times 15 and 20
- A = 4 between times 26 and 30
- A = 4 between times 36 and 40.
- no reclaimed slack is available in FIG. 12, because each thread is taking its worst-case execution time.
- FIG. 13 is a task execution timeline illustrating an observed thread activation with no slack requests.
- FIG. 13 represents an actual timeline with no slack requests from aperiodic tasks. Threads are executed with priority by rate, so long as a partition's time budget has not expired. It will be observed that the actual timeline of FIG. 13 differs from the conceptual model of FIG. 12, in that in FIG. 12 tasks are distributed fairly uniformly throughout the hyperperiod, whereas in FIG. 13 the highest priority task that is ready executes first.
- the conceptual model of timeline slack of FIG. 12 cannot make assumptions about the future, and so it must assume that tasks are uniformly distributed throughout the hyperperiod; however, when a task is ready, it will run, so the observed timeline in FIG. 13 differs from the conceptual model of timeline slack of FIG. 12.
- the single down arrows in FIG. 13 indicate the times when thread 3 of partition P1 is executed. It will be observed that thread (1,2,2) executes in 2 consecutive time units (i.e., between time 2 and 4, and between time 23 and 25) in FIG. 13, whereas this thread executed in four separate one-unit intervals in FIG. 12.
- the double down arrows in FIG. 13 indicate the times when thread 3 of partition P2 is executed. It will be observed that thread (2,2,2) executes in 2 consecutive time units (i.e., between time 4 and 6, and between time 25 and 27) in FIG. 13, whereas this thread executed in two separate two-unit intervals in FIG. 12. Also, thread (2,2,3) executes in 2 consecutive time units (i.e., between time 27 and 29) in FIG. 13, whereas this thread executed in two separate one-unit intervals in FIG. 12. The amounts of available slack A and unused slack U are indicated above the timeline.
- FIG. 14 is a task execution timeline illustrating a thread activation with slack requests, in accordance with an embodiment ofthe invention.
- the aperiodic slack requests are depicted by boxes 61-65 in FIG. 14.
- a slack request 64 at time t = 32 for 3 ms of time at priority 1, denoted by SA(32,1).
- a slack request 65 at time t = 37.5 for 2.5 ms of time at priority 2, denoted by SA(37.5,2).
- Also contained within the aperiodic boxes is information concerning the available slack, unused slack, reclaimed slack, and slack requested.
- Time partitioning is important in providing time fault management. No thread in one partition P1 can cause a time fault in another partition P2. Instead, the time fault is contained within P1.
- Time partitioning also provides reduced certification time and cost. Partitions are classified by "criticality". A highly critical partition is more costly to certify. With time partitioning, changes to tasks in a lower criticality partition do not require recertification of the higher criticality partition. In DEOS, there are two types of "budget" sharing that can take place. It is important to understand that budget sharing via slack does not violate time partitioning in any way.
- budgets are shared within a process in the sense that budgets for newly activated threads (say, at level i > h) must come from the primary thread's budget, and budgets for recently deactivated threads are returned to the primary thread's budget.
- system slack for P comes only from user threads that complete for period having consumed less than their worst-case execution time. Thus slack is never used for thread activation, since its lifetime is limited to the period in which it was reclaimed.
- pooling of reclaimed slack at a system level might initially raise questions about whether time partitioning is preserved. It is not difficult to see that time partitioning is preserved when (i) a process' utilization is still used to determine feasibility of thread activations belonging to that process and (ii) slack is not preallocated beyond the next period for which it might be consumed by a thread activation.
- the admission decision to activate a thread t into a process P is based solely on the availability of P's primary budget. Whether P's primary budget is active or inactive, the same feasibility test is used for user threads. Requests to activate/deactivate a process' primary thread are again based on the same feasibility test, except that the amount of budget (de)allocated to (from) the primary thread is all remaining budget not already allocated to active threads in P.
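The admission test described here can be sketched as a simple budget check against the primary thread's remaining budget. This is a hedged illustration, not the patent's exact feasibility test; the names `primary_budget`, `allocated`, and `request` are assumptions.

```python
# Hedged sketch of the admission decision: a user thread is admitted into
# process P only if its budget fits within the budget not already allocated
# to P's active threads. The same test applies whether P's primary budget
# is active or inactive. All names are illustrative assumptions.

def admit_thread(primary_budget, allocated, request):
    """Return (admitted, new_allocated)."""
    if allocated + request <= primary_budget:
        return True, allocated + request
    return False, allocated

def deactivate_thread(allocated, budget):
    """A deactivated thread's budget is returned to the primary thread."""
    return allocated - budget
```

Because the check never reaches outside P's own budget, admitting or rejecting a thread in one process cannot disturb another process' guarantees, which is the time-partitioning point made above.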
- P's primary thread's rate is h, h ≤ j.
- TimelineSlack_h will have been decreased, and also the amount of slack made available beginning at time s will be decreased for all level k ∈ {h, ..., n} slack consumers. This follows from the decrement of ΔTimelineSlack_{h,j}, and consequently, beginning at time s, TimelineSlack_h is decreased by ΔTimelineSlack_{h,j}.
- Feasibility for a new process is then based on the amount of unallocated bandwidth at the system level, and the same budget transfer mechanism is used to transfer system budget to a new process as is used to transfer a process' budget to a new thread.
- An exactly analogous argument can be used to show time partitioning is preserved in the presence of process creations/deletions as well.
- the present invention enables the execution of non-critical functions that would otherwise be infeasible, such as debugging, monitoring, networking, and so forth, and/or it enables the enhancement of essential tasks, such as increasing display rates.
- the present invention supports more robust time fault management. Time faults in one partition might be recoverable if slack has been reclaimed from another time partition.
- FIG. 15 is a task execution timeline illustrating task execution without slack stealing.
- Partitions P1 and P2 are defined as critical.
- Partition P3 is non-essential.
- P1 uses 10 ms of time;
- P2 uses 10 ms;
- P3 uses a minimum of 5 ms and a maximum of 20 ms.
- the hyperperiod is 30 ms.
- 5 ms of time is unallocated timeline slack and, absent a common slack pool usable by all tasks independent of task partition, can be used only by background tasks.
- FIG. 15 illustrates that without pooled slack stealing, only background tasks can share unclaimed slack.
- FIG. 16 is a task execution timeline illustrating task execution using both reclaimed and timeline slack.
- a common slack pool is provided for use by all tasks, independent of task partition. Partitions P1 through P3 are as described above for FIG. 15. However, in FIG. 16, P1 completes 4 ms early.
- P2 begins 4 ms early and overruns at time 21, using reclaimed slack from the slack pool that was released by P1.
- P3 uses unallocated timeline slack from the slack pool.
- FIG. 16 illustrates an example of fault recovery through the use of slack stealing from a common pool that is available for all tasks, independent of task partition.
- FIG. 17 is a task execution timeline illustrating task execution using reclaimed slack.
- a common slack pool is provided for use by all tasks, independent of task partition. Partitions P1 through P3 are as described above for FIG. 15.
- FIG. 17 demonstrates how a high priority, but non-essential, partition such as P3 can use both timeline slack and reclaimed slack from a common pool of slack. Rather than maintaining a slack pool that is independent of time partitions, the present invention could be implemented by maintaining a slack pool for each time partition.
- Pseudo-code for handling system initialization of slack variables is provided by Algorithm 7 (Appendix B). Algorithm 7 is called once before the start of the initial hyperperiod. Failure to initialize slack variables at this time will result in incorrect values later. This algorithm requires modifications when primary thread periods can be other than T_1.
- a level k period reset is performed at times j·T_k, where j is any natural number.
- the inner loop setting ΔTimelineSlack is a no-op when no activations or deactivations have occurred at level k in the last period of duration T_k, 1 ≤ k ≤ j, prior to execution of this routine.
- a deactivation of a process with no user threads is equivalent to killing a process.
- an activation of a primary thread budget with no user threads is equivalent to creating a process in terms of changes in timeline slack.
- a true process create/delete requires updating the registry and status ofthe processes and doing a feasibility analysis, whereas a (de)activation request for a primary thread is always granted on the basis of feasibility.
- a call to Algorithm 9 is to toggle the activation status. In practice, an explicit request would likely occur, so a check for a no-op would be present initially.
- the time to deactivate a thread is not bounded, but in other embodiments it is. In the above-described embodiment, we have assumed that a thread deactivation request becomes effective at the next period the thread would have run.
- Pseudo-code for user thread (de)activation is provided by Algorithm 10 (Appendix B). Algorithm 10 is executed at the time the activation/deactivation is being processed/granted, and not at the next T_j period boundary. It is at the next T_j period boundary that reclaimed timeline slack is updated.
- Thread deactivation is not conditional on the basis of feasibility. A thread in an inappropriate state may not be deactivated, but that is not a scheduling issue and so is not discussed here.
- a process activation refers to a process creation, not a primary thread activation.
- a newly created process has no active user thread, and we assume the primary thread is active. (An inactive primary thread is actually a simpler case.)
- a process deactivation refers to a process deletion, not a primary thread deactivation.
- a process cannot be deleted if any of its user threads are active. If the primary thread is active, it is first deactivated by calling PrimaryThread(De)activation.
- Algorithm 11 calls Algorithm 9 to make any necessary transfer of budgets.
- t is the time of request, and r is the process' rate, P.Rate.
- Section 4.2 is devoted to describing slack updates associated with period boundary transitions.
- the detailed algorithm describing which variables get updated was given earlier as Algorithm 8.
- This section emphasizes the differences between the improved slack stealing scheduling algorithm and the known slack stealing algorithm, which required no period updates (beyond the hyperperiod reset).
- the root of the difference stems from the fact that with dynamic thread/process creation, knowledge of the future cannot be assumed; hence available timeline slack at level i can only be calculated for a period, at the start of the period T_i.
- Algorithm 12 updates the reclaimed slack variables used when computing slack availability. It is called whenever a task executing on fixed budget completes for period. The same algorithm applies whether doing incremental or aggregate updates.
- each of the remaining full periods may have R_i/(B + 1) units of time allocated to slack requests.
- If R_i is the amount of unused worst-case fixed budget to be reclaimed, then the amount of slack that can be given to level j requestors (j ≥ i) in the current period defined by T_i is the minimum of R_i and the difference between the next start of period T_j and the current time, plus any remaining unused ISR budgets, i.e. min(R_i, (η_j(t)+1)·T_j − (t − B_ISR(t))).
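The cap described in prose here — reclaimed slack granted to level-j requestors is limited both by the reclaimed amount and by the time remaining until the next start of period T_j — can be sketched as below. The original formula is partly garbled in the source text, so the function is a reconstruction from the surrounding prose; names are assumptions.

```python
import math

# Hedged sketch of the reclaimed-slack cap: the grant to level-j requestors
# is the minimum of the reclaimed amount R_i and the time remaining until
# the next start of period T_j, plus any remaining unused ISR budget.
# Reconstructed from the prose; names and signature are assumptions.

def grantable_slack(R_i, t, T_j, unused_isr_budget=0.0):
    next_Tj_start = (math.floor(t / T_j) + 1) * T_j   # (eta_j(t)+1) * T_j
    return min(R_i, next_Tj_start - t + unused_isr_budget)
```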
- Idle accumulator slack variables (Idle_i, 1 ≤ i ≤ n) are updated when transitioning from an idle processor to a busy processor. This update requires knowledge of how long the processor spent idle.
- the TimeSlice attribute is used to indicate the amount of time allocated to a thread before an intermediate timeout expires. Alternative sets are possible, of course.
- the attribute ExecTime provides a mechanism for determining the reclaimed slack. Specifically, at the completion of a periodic thread (within its period), the amount of slack reclaimed from that thread is ComputeTime - ExecTime.
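The reclamation rule stated here (at completion-for-period, reclaimed slack is ComputeTime − ExecTime) is directly expressible in code. The record type is an illustrative assumption; only the two attribute names come from the text.

```python
from dataclasses import dataclass

# Minimal sketch of slack reclamation at thread completion-for-period.
# ComputeTime is the thread's worst-case (budgeted) time; ExecTime is the
# time it actually used this period. Field names follow the attributes
# named in the text; the record itself is an assumption.

@dataclass
class ThreadRecord:
    ComputeTime: float   # worst-case budget per period
    ExecTime: float      # actual execution time this period

def reclaimed_slack(th: ThreadRecord) -> float:
    """Slack reclaimed when a periodic thread completes within its period."""
    return max(th.ComputeTime - th.ExecTime, 0.0)
```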
- the idle thread may need a non-zero budget, because it does things like thread/process deletes, some portions of which are executed as critical sections (so there may be some minimal fixed budget).
- the idle process' execution time updates at the clock tick, so idle time will never be greater than T_1. Relaxing this assumption is not entirely straightforward and introduces additional, perhaps non-trivial, bookkeeping. A case statement over the ranges in which the idle time is consumed is necessary for threads that can wait for resources while holding their budgets. The difficulty arises when a thread waits for a resource, does not release its budget, sufficient idle time elapses to chisel into the reserved budget, and then the thread decides to give its entire budget to slack (after some of it has been consumed by idle).
- zero fixed budget slack-consuming threads are not supported. Such a class of threads would not be allowed to access resources or otherwise introduce overheads associated with blocking. Consequently, every slack-consuming thread must run at some rate and have some non-zero fixed budget. The implication is that no slack-consuming thread can run at priority n+1, or equivalently in background mode. In other embodiments, priorities can be introduced among slack-consuming threads at a particular rate.
- Slack variable updates are always necessary prior to computing slack availability to ensure that the calculated value for available slack is not optimistic.
- Slack variable updates after consuming slack are analogous to slack variable updates upon thread completion, and they might be deferrable until either a new request for slack arrives or a downward rate transition occurs (i.e., DEOS transitions from executing a higher rate thread to a lower rate thread).
- Pseudo-code to update the aperiodic slack variables used when computing slack availability is provided by Algorithm 14 (Appendix B).
- the aperiodic task t may execute over several time increments; i.e., it may be scheduled, consume all slack, suspend itself, be rescheduled when more slack is available, etc.
- Algorithm 14 contains pseudo-code appropriate for updating the aperiodic accumulators, and it is O(n). It makes reference to a slack data structure shown in Table 11 (Slack Record Attributes) (Appendix A). The slack record keeps track of how much slack has been reclaimed at each level (R_i), and how much level i only unused slack can be carried forward (U_i). Some additional variables are used in candidate algorithms to reduce the number of required updates. In Algorithm 14, timeline slack has been decomposed into amounts available at each particular rate, where a requestor at rate j can consume all available slack at rates k, 1 ≤ k ≤ j.
- level i slack is consumed before level i+1 slack since it expires first.
- the algorithm simply starts at level 1, increments AperiodicTime_1 by the minimum of slack consumed and slack available at level 1 prior to slack consumption, decrements the amount of slack consumed, and continues up through the slower rate slack levels until all slack has been included in the AperiodicTime_j's. Note that all attributable slack will be consumed.
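The level-walking loop just described can be sketched as follows. This is written in the spirit of Algorithm 14 (whose exact pseudo-code is in Appendix B), not a transcription of it: level i slack is charged first because it expires before level i+1 slack, and all names are assumptions.

```python
# Hedged O(n) sketch of the consumption-attribution loop described above:
# consumed slack is charged against the fastest (soonest-expiring) levels
# first. available[i] is slack available at level i+1 before consumption;
# aperiodic_time[i] is the per-level accumulator being updated.

def attribute_consumed_slack(consumed, available, aperiodic_time):
    """Walk levels 1..n, charging consumed slack to the fastest levels first."""
    for i in range(len(available)):
        if consumed <= 0:
            break
        charge = min(consumed, available[i])
        aperiodic_time[i] += charge
        consumed -= charge
    return aperiodic_time
```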
- As to ISR threads executing on fixed budget: "The scheduler shall never permit level i threads ExecutingOnSlack without at least
- In addition to requiring a minimum amount of available slack, the slack accumulators must be predecremented, should the requesting thread be preempted by another slack consumer. This is accomplished by calling
- this invariant holds whenever scheduling a slack-consuming thread, not just prior to a wait-for-resource request by a slack-consuming thread.
- feasibility is checked prior to initial scheduling and ordinary resumption of threads executing on slack, in addition to any requests to wait for resources.
- contextSwitchPlusDelta covers the overhead cost of the context switch needed to begin executing the requesting thread.
- Another contextSwitchPlusDelta covers the cost of the unpredictable context switch necessary to resume the preempted thread.
- the CacheBonus allows for a worst-case rewrite of the cache when resuming the preempted thread. (In fact, CacheBonus and contextSwitchPlusDelta are added to the remaining budget of the thread being preempted.) In addition, a SlackVarUpdateTime is required, which is consumed when predecrementing the slack accumulators.
- CacheBonus + contextSwitchPlusDelta is transferred to the budget of the thread being preempted (just as it is when an ISR thread operating on fixed budget preempts a thread).
- Threads that can both execute on slack and either lock mutexes and/or wait for resources must have a minimum fixed budget equal to the resource locking time. For the case of mutexes, this is worst-case execution time for the (nested) mutex. For other resources, it is just the overhead needed to cover the cost of context swaps (i.e. 2*contextSwitchPlusDelta + CacheBonus).
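The minimum fixed budget rule just stated is directly computable: the worst-case (nested) mutex execution time for mutex holders, and the context-swap overhead 2·contextSwitchPlusDelta + CacheBonus for other resources. The function below is an illustrative sketch; parameter names mirror the text, and the branch structure is an assumption.

```python
# Hedged sketch of the minimum fixed budget for threads that execute on
# slack and lock mutexes or wait for resources. Names mirror the text;
# the signature is an assumption.

def min_fixed_budget(locks_mutexes, wc_mutex_time,
                     context_switch_plus_delta, cache_bonus):
    if locks_mutexes:
        # worst-case execution time for the (nested) mutex
        return wc_mutex_time
    # overhead to cover the cost of context swaps
    return 2 * context_switch_plus_delta + cache_bonus
```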
- the reason for requiring a minimal fixed budget is that a TBE (time budget exceeded) exception does not lower the priority of the thread holding a mutex, and predecremented slack will have expired when the thread becomes active the next period.
- slack reclaimed at level i may not be available for consumption at level j, where j < i, although it may be in other embodiments.
- This algorithm calculates the slack available beginning at the time of the call, say s, and ending at the ends of periods defined by (η_1(s)+1)T_1, (η_2(s)+1)T_2, ..., (η_n(s)+1)T_n.
- Δ and cacheBonus are selected based on system overheads beyond context swaps.
- UpdateAperiodicSlackVariables should be called prior to execution of this routine, when necessary. UpdateIdleSlackVariables will have automatically been called prior to execution of this routine.
- Beyond slack availability computations, we consider more broadly the question of slack allocation policies that may impact performance and/or fairness (in DEOS or in other embodiments). We list two possible optimizations here, and then we describe them at the end of this section.
- Reclaimed slack from threads completing for period at level i should not be allocated to levels 1, ..., i−1. From an implementation view, this is clearly easiest to implement and also offers the advantage of providing a form of fairness among rates. Reclaimed slack from level i is available only to threads at levels i, ..., n.
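The fairness policy above restricts reclaimed slack produced at level i to requestors at levels i, ..., n; equivalently, a level-j requestor sees only the slack reclaimed at levels 1, ..., j. A minimal sketch, ignoring expiration and carry-forward for clarity (those are handled elsewhere in the text), with names as assumptions:

```python
# Sketch of the rate-fairness policy: a level-j requestor may consume only
# reclaimed slack produced at levels i <= j. Expiration and carry-forward
# are deliberately omitted; this shows the visibility rule only.

def reclaimed_available_to(level_j, reclaimed):
    """reclaimed[i] holds slack reclaimed at level i+1 (1-indexed levels)."""
    return sum(reclaimed[:level_j])
```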
- Updates can also be deferred when more reclaimed slack becomes available at level j while executing a slack consumer at level i, j ≤ i.
- Algorithm 15 is called to compute the available slack.
- a row vector of length n is returned containing the slack available at each level.
- all idle accumulators and aperiodic accumulators must have been updated. Idle accumulators are automatically updated when transitioning from idle to busy (which would have occurred prior to making a slack availability calculation).
- the aperiodic accumulators need not have been updated (for example, when a slack consumer preempts another slack consumer). Consequently, Algorithm 14 must have been executed prior to calling AvailableSlack in Algorithm 15.
- the slack accumulators are predecremented by an amount sufficient to allow for context swaps and caching effects. This entails one more call to the code in Algorithm 14.
- level j slack will become available at the next period of T_j, which is before the next period of T_k.
- the level k slack consumer will be preempted in favor of the level j slack consumer (if slack priorities as defined by rate are observed). Repeated situations like this might lead to many slack accumulator updates, and potentially a lot of overhead. Providing guaranteed slack would save on some of the updates, since higher priority slack consumers would then be viewed as a preemption, but requests for later slack would not have to be made by the preempted slack consumer.
- 1. Time synchronization due to clock drift. It is assumed that we only move forward in time, and that we know the old time from which we are jumping. Time synchronization points are always at period boundaries. 2. Process/thread restarts.
- FIGS. 18A and 18B together are a process flowchart encompassing various methods of task scheduling that employ slack stealing in a system executing harmonic and dynamic tasks, in accordance with an embodiment of the invention.
- FIGS. 18A-B are a process flowchart having action boxes 101, 103, 105, 107, 109, 111, 113, 115, and 117.
- a multitasking system schedules tasks according to a rate monotonic algorithm. As part ofthe scheduling, priority levels are assigned to various tasks.
- available slack is determined for tasks at each priority level, taking into account tasks that are activating and/or inactivating at unpredictable times.
- Boxes 107 through 111 provide various implementations to determining available slack.
- a static table is maintained for timeline slack. The slack table is recalculated at task activation and task deactivation.
- accumulators are maintained to keep track of slack-related parameters including slack consumed, reclaimed slack, and idle time. Based on the contents ofthe accumulators, slack consumed, reclaimed slack, and idle time can all be calculated.
- the accumulators are updated upon the occurrence of an event from the following group:
- the accumulators are predecremented to allow for overhead associated with allocating slack, such as context swaps, cache refreshes, and so forth.
- slack is allocated to tasks according to their priority, with highest priority tasks being allocated slack first. As a result, an aperiodic high priority task can steal slack from a periodic low priority task without impacting the periodic task's hard execution deadline.
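The highest-priority-first allocation described here can be sketched as a greedy pass over pending requests. This is an illustrative model of the allocation order only, not the patent's scheduler; the request representation and names are assumptions.

```python
# Hedged sketch of priority-ordered slack allocation: requests are served
# highest priority first (lower number = higher priority), each taking as
# much of the pool as remains. Request tuples (priority, amount) and names
# are assumptions.

def allocate_slack(pool, requests):
    """requests: list of (priority, amount). Returns ({index: granted}, leftover)."""
    grants = {}
    order = sorted(range(len(requests)), key=lambda k: requests[k][0])
    for k in order:
        _, amount = requests[k]
        granted = min(amount, pool)
        grants[k] = granted
        pool -= granted
    return grants, pool
```

Because periodic tasks' fixed budgets are never part of the pool, an aperiodic high-priority task draining the pool cannot affect a periodic task's hard deadline, matching the claim above.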
- the methods end.
- FIG. 19 depicts a block diagram of a processor 130 coupled to a machine- readable medium 132.
- Processor 130 may be further coupled to bus 134 for communication with other processors and/or with other components of a data processing system.
- Machine-readable medium 132 may include fixed devices coupled to processor 130, such as an internal storage medium or programmable memory device.
- Machine-readable medium 132 may further include removable devices coupled to processor 130, such as a removable storage medium or programming cartridge.
- Machine-readable medium 132 contains instructions stored thereon, in machine-readable format, capable of causing processor 130 to carry out the methods described herein.
- the present invention provides both apparatus and methods for a task scheduler that can provide slack-stealing in a real-time environment that encompasses periodic and dynamic tasks.
- Computer systems that can perform as described herein are commercially desirable in many types of real-time control systems, including but not limited to aerospace flight control and industrial process control systems.
- the various embodiments of the invention can be used to define electronic systems to carry out the task scheduling activities of multitasking systems.
- the electronic systems described make use of a variety of electronic equipment having one or more processors utilizing instructions in machine-readable form to carry out the methods described herein.
Abstract
A multitasking system executes harmonic and dynamic real-time tasks at various priority levels, in which slack is stolen from timeline slack and reclaimed slack to permit execution of high-priority, non-essential tasks on a best-effort basis. Counts of the amount of slack consumed, slack reclaimed, and periodic compute time are maintained per individual priority level and dynamically updated at certain times. Idle time is computed per priority level. Available slack is calculated, and slack is allocated and consumed by rate, highest rate first and lowest rate last. Slack is made available to tasks in more than one partition. All slack belongs to a common, system-wide slack pool. The invention also provides a computer system and various methods for carrying out slack scheduling in a time-partitioned system.
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US09/751,955 US7140022B2 (en) | 2000-06-02 | 2000-12-29 | Method and apparatus for slack stealing with dynamic threads |
| US751955 | 2000-12-29 | ||
| PCT/US2001/017738 WO2002054237A2 (fr) | 2000-12-29 | 2001-06-01 | Procedes et appareil de detournement de marge avec des fils dynamiques |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| EP1433054A2 true EP1433054A2 (fr) | 2004-06-30 |
Family
ID=25024230
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| EP01939818A Withdrawn EP1433054A2 (fr) | 2000-12-29 | 2001-06-01 | Procedes et appareil de detournement de marge avec des fils dynamiques |
Country Status (3)
| Country | Link |
|---|---|
| EP (1) | EP1433054A2 (fr) |
| JP (1) | JP2004533667A (fr) |
| WO (1) | WO2002054237A2 (fr) |
Families Citing this family (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP2219112A3 (fr) * | 2009-01-23 | 2012-08-15 | Imec | Procédé et système de fonctionnement en temps réel dur |
| CN101710292B (zh) * | 2009-12-21 | 2013-03-27 | 中国人民解放军信息工程大学 | 一种可重构任务处理系统、调度器及任务调度方法 |
| JP5689239B2 (ja) | 2010-02-03 | 2015-03-25 | 昭和シェル石油株式会社 | ガソリンエンジンおよびディーゼルエンジン油 |
| US10768984B2 (en) | 2015-06-11 | 2020-09-08 | Honeywell International Inc. | Systems and methods for scheduling tasks using sliding time windows |
| US9465664B1 (en) * | 2015-09-09 | 2016-10-11 | Honeywell International Inc. | Systems and methods for allocation of environmentally regulated slack |
Non-Patent Citations (1)
| Title |
|---|
| See references of WO02054237A3 * |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2002054237A3 (fr) | 2004-04-01 |
| WO2002054237A2 (fr) | 2002-07-11 |
| JP2004533667A (ja) | 2004-11-04 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
| 17P | Request for examination filed |
Effective date: 20021126 |
|
| AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR |
|
| 17Q | First examination report despatched |
Effective date: 20070924 |
|
| STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
| 18D | Application deemed to be withdrawn |
Effective date: 20100101 |