
How to learn to correctly determine deadlines for completing work: experts answer

A preemptive version of the previous algorithm is shortest remaining time next. Under this algorithm, the scheduler always selects the process whose remaining execution time is shortest. Here too, the completion time of each task must be known in advance. When a new task arrives, its total execution time is compared with the remaining execution time of the current task. If the new task needs less time, the current process is suspended and control passes to the new task. This scheme services short requests quickly.
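
As a sketch (not part of the original text), this selection rule might look as follows in Python; the task names and times are invented:

    # A sketch of shortest-remaining-time-next selection, assuming each
    # task's total execution time is known in advance, as the text requires.
    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        remaining: float  # estimated time left to run, ms

    def pick_next(ready: list) -> Task:
        # Always run the task with the least time left; a newly arrived
        # short task therefore preempts a long-running one.
        return min(ready, key=lambda t: t.remaining)

    ready = [Task("batch-report", 400.0), Task("new-query", 30.0)]
    print(pick_next(ready).name)  # -> new-query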

Three-level scheduling

Batch processing systems allow scheduling at three levels, as shown in the figure. As new jobs arrive in the system, they are first placed in a queue stored on disk. The admission scheduler selects a job and passes it to the system; the remaining jobs stay in the queue.

As soon as a job enters the system, a corresponding process is created for it, and it can immediately begin to compete for access to the processor. However, it may turn out that there are too many processes to fit in memory, and some of them will have to be paged out to disk. The second level of scheduling decides which processes are kept in memory and which on disk. This is done by the memory scheduler.

The memory scheduler periodically looks at processes on disk to decide which ones to move into memory. Among the criteria used by the scheduler are the following:

1. How long has it been since the process was swapped to disk or loaded from disk?

2. How long has the process been using the CPU?

3. What is the size of the process (small processes do not interfere)?

4. What is the importance of the process?

The third level of scheduling decides which of the ready processes gets the processor. When people speak of a "scheduler", they usually mean the CPU scheduler. This scheduler uses whatever algorithm suits the situation, preemptive or non-preemptive. We have already looked at some of these algorithms and will meet others later.

Scheduling in interactive systems.

Round-robin scheduling.

One of the oldest, simplest, fairest and most frequently used algorithms is round-robin scheduling. Each process is given a certain interval of processor time, a so-called quantum. If the process is still running when its quantum expires, it is preempted and the CPU is given to another process. Naturally, if the process blocks or finishes early, the switch happens at that moment. Round-robin scheduling is simple to implement: the scheduler only needs to maintain a list of ready processes, and when a process uses up its quantum it is sent to the end of the list.

The only interesting aspect of this algorithm is the length of the quantum. Switching from one process to another takes time: registers and memory maps must be saved and loaded, tables and lists updated, the memory cache flushed and reloaded, and so on. The conclusion can be stated as follows: too small a quantum leads to frequent process switches and low efficiency, while too large a quantum leads to slow responses to short interactive requests. A quantum of around 20-50 ms is often a reasonable compromise.
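
A minimal round-robin sketch along these lines, with invented process names and burst times and the quantum from the compromise above:

    # Round-robin sketch: a list of ready processes and a fixed quantum
    # (20-50 ms is the compromise suggested above).
    from collections import deque

    def round_robin(bursts, quantum=30):
        ready = deque(bursts.items())          # (name, remaining ms)
        while ready:
            name, remaining = ready.popleft()  # head of the list gets the CPU
            ran = min(quantum, remaining)
            print(f"{name} runs {ran} ms")
            if remaining > ran:                # quantum expired: go to the end
                ready.append((name, remaining - ran))

    round_robin({"A": 70, "B": 40, "C": 25})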

Priority scheduling.

Round-robin scheduling rests on an important assumption: that all processes are equally important. On a computer with a large number of users this may not be so. For example, at a university deans must be served first, then professors, secretaries and cleaners, and only then students. The need to take such external factors into account leads to priority scheduling. The basic idea is simple: each process is assigned a priority, and the CPU goes to the ready process with the highest priority.

Multiple queues.

One of the first priority schedulers was implemented in the CTSS system (Compatible Time-Sharing System). The main problem in CTSS was that process switching was very slow, since the IBM 7094 computer could hold only one process in memory: each switch meant swapping the current process out to disk and reading the new one in from disk. The CTSS developers quickly realized that efficiency would be higher if CPU-bound processes were occasionally given a large quantum rather than being given small quanta frequently. On the one hand, this reduces the number of swaps between memory and disk; on the other, as we have already seen, it worsens response time.

As a result, a solution with priority classes was developed. Processes in the highest-priority class were allocated one quantum, processes in the next class two quanta, in the next class four quanta, and so on. When a process used up its allotted time, it was moved down one class.

As an example, consider a process that needs to compute for 100 quanta. First it is given one quantum, after which it is swapped to disk. The next time it gets 2 quanta, then 4, 8, 16, 32 and 64, although of the final 64 it uses only 37. Only 7 swaps (including the initial load) are needed instead of the 100 that the round-robin algorithm would require. In addition, as the process sinks deeper into the priority queues, it runs less and less often, yielding the CPU to shorter processes.
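
The arithmetic of this example is easy to check; the following few lines are merely such a check:

    # Check of the CTSS example: the quantum doubles with each demotion
    # until the 100-quantum job completes.
    need, quantum, runs = 100, 1, 0
    while need > 0:
        runs += 1        # each run costs one swap-in (the first is the initial load)
        need -= quantum  # the last run uses only 37 of its 64 quanta
        quantum *= 2
    print(runs)          # -> 7 transfers, versus 100 under pure round robin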

“Shortest process next”

Since the shortest-task-first algorithm minimizes the average turnaround time in batch systems, one would like to use it in interactive systems as well. To a certain extent this is possible. Interactive processes most often follow the pattern "wait for a command, execute the command, wait for a command, execute the command..." If we treat the execution of each command as a separate task, we can minimize the overall average response time by running the shortest task first. The only problem is figuring out which of the waiting processes is the shortest.

One method estimates a process's length from its previous behavior, and runs the process with the shortest estimated time. Suppose the estimated execution time of a command is T0 and its next measured execution time is T1. The estimate can be improved by taking a weighted sum of these times, aT0 + (1 − a)T1. By choosing a suitable value of a, we can make the estimation algorithm forget previous runs quickly or, on the contrary, remember them for a long time. Taking a = 1/2, we get the following series of estimates:

T0, T0/2 + T1/2, T0/4 + T1/4 + T2/2, T0/8 + T1/8 + T2/4 + T3/2.

After three runs, the weight of T0 in the estimate drops to 1/8.

Estimating the next value in a series as a weighted average of the latest measured value and the previous estimate is often called aging. The method is applicable in many situations where an estimate must be built from previous values. Aging is easiest to implement with a = 1/2: at each step you just add the new value to the current estimate and halve the sum (shifting right by 1 bit).
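
A sketch of this aging estimator with a = 1/2; the initial estimate and the measured run times are invented:

    # Aging with a = 1/2: the new estimate is (measured run + old estimate) / 2,
    # i.e. one addition and one right shift for integer times.
    def age(estimate, measured):
        return (estimate + measured) >> 1  # add, then halve by shifting right 1 bit

    estimate = 40                  # T0, the initial estimate in ms
    for measured in (20, 20, 20):  # three subsequent runs of 20 ms
        estimate = age(estimate, measured)
    print(estimate)                # -> 22: T0's weight has fallen to 1/8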

Guaranteed scheduling.

A fundamentally different approach to scheduling is to make real promises to the users and then keep them. Here is one promise that is easy to state and easy to keep: if n users share a processor, each will get 1/n of the processor's power.

Similarly, in a system with one user and n running processes, each process will get 1/n of the CPU cycles.

To keep this promise, the system must track how much CPU time each process has received since its creation. It then computes the amount of CPU time each process is entitled to, namely the time since creation divided by n. Dividing the CPU time actually received by the entitled time gives a ratio: a value of 0.5 means the process has had only half its share, and 2.0 means it has had twice as much as it was due. The process with the lowest ratio is then run, until its ratio rises above that of its nearest neighbor.
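
A sketch of the selection step, with invented process names and CPU times:

    # Guaranteed-scheduling sketch: run the process whose ratio of CPU time
    # received to CPU time entitled (elapsed / n) is smallest.
    def pick(processes, elapsed, n):
        entitled = elapsed / n  # each process's fair share so far
        return min(processes, key=lambda p: p[1] / entitled)

    procs = [("editor", 1.0), ("compiler", 9.0), ("mailer", 2.0)]
    print(pick(procs, elapsed=12.0, n=3))  # -> editor, ratio 0.25: furthest behind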

Lottery scheduling.

The algorithm is based on handing out lottery tickets to processes for access to various resources, including the processor. When the scheduler has to make a decision, a lottery ticket is drawn at random, and its holder gets the resource. Applied to CPU access, the "lottery" might be held 50 times per second, with the winner receiving 20 ms of CPU time.

More important processes can be given extra tickets to raise their chance of winning. If there are 100 tickets in all and one process holds 20 of them, it will get 20% of the processor time. Unlike a priority scheduler, in which it is very hard to say what, for example, priority 40 actually means, in lottery scheduling everything is obvious: each process receives a share of the resource roughly equal to its share of the tickets.

Lottery scheduling has several interesting properties. For example, if a process receives some tickets upon creation, then already in the very next lottery its chances of winning are proportional to the number of tickets it holds.

Cooperating processes can exchange tickets if necessary. For example, if a client process sends a message to a server process and then blocks, it can hand all its tickets to the server to increase the chance that the server runs next. When the server finishes, it returns the tickets.
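
A sketch of the draw itself; the ticket counts are invented, and the 50-draws-per-second pacing is omitted:

    # Lottery sketch: one ticket, one chance.
    import random

    def draw(tickets):
        pool = [name for name, n in tickets.items() for _ in range(n)]
        return random.choice(pool)  # every ticket is equally likely to win

    tickets = {"video": 20, "shell": 5, "backup": 1}
    wins = {name: 0 for name in tickets}
    for _ in range(10_000):
        wins[draw(tickets)] += 1
    print(wins)  # video wins about 20/26 of the draws, i.e. ~77% of the CPU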

Fair-share scheduling.

So far we have assumed that each process is scheduled without regard to who its owner is. Consequently, if user 1 starts 9 processes and user 2 starts 1 process, then under round-robin scheduling or with equal priorities user 1 will get 90% of the CPU and user 2 only 10%.

To avoid such situations, some systems take a process's owner into account when scheduling. In this model each user is allocated a certain share of the CPU, and the scheduler selects processes so as to honor it. If in our example each user had been promised 50% of the CPU, each will get 50%, regardless of how many processes they own.

Scheduling in real-time systems.

In real-time systems, time plays an essential role. Most often, one or more external physical devices generate input signals, and the computer must respond adequately to them within a given period of time.

Real-time systems are divided into hard real-time systems, which have strict deadlines for each task that must always be met, and soft real-time systems, in which violations of the schedule are undesirable but tolerable. In both cases the program is divided into several processes, each of whose behavior is predictable. These processes are usually short and finish within a second. When an external event occurs, it is the scheduler's job to ensure that every deadline is met.

External events to which the system must respond can be divided into periodic (occurring at regular intervals) and aperiodic (occurring unpredictably). There may be several periodic streams of events that the system has to process, and depending on how long each event takes to handle, the system may not even be able to process them all in time.
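
For periodic streams there is a standard utilization test (not stated in the text above, but implied by it): if event stream i needs C_i ms of CPU every P_i ms, the load is only handleable if the utilizations sum to at most 1. A sketch:

    # Utilization test for m periodic event streams: stream i delivers an
    # event every P_i ms and needs C_i ms of CPU per event.
    def schedulable(streams):
        return sum(c / p for c, p in streams) <= 1.0

    # (C, P) in ms: total utilization 0.5 + 0.3 + 0.1 = 0.9
    print(schedulable([(50, 100), (30, 100), (10, 100)]))  # -> True, 10% spare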




Often, developers, especially inexperienced ones, get confused when asked to set deadlines for completing tasks. However, the ability to plan is a very useful and necessary skill that helps not only in work, but also in life. We decided to ask the experts how to learn how to plan correctly and deliver projects on time.

Brief conclusions can be found at the end of the article.

A developer usually needs to take into account several parameters at once to estimate the time it takes to complete a task:

  1. Experience in performing such tasks and working with this technology stack. If you have to do something fundamentally new, you need to be especially careful with your assessment.
  2. Experience working with this client. Knowing the customer, you can roughly predict some additional requirements and the scope of changes.
  3. The quality of the code you will be working with. This is the factor with the greatest influence: because of it everything can drag on and generally not go according to plan. If the project has tests, all dependencies are explicit, and the functionality is well isolated, things are not so bad. It is much worse if you are dealing with legacy code without tests, or with code overloaded with implicit dependencies. Matters can also be complicated by things like "magic functions" (when the final call stack is hard to see from the code) and code duplication (when several independent sections must be edited to change one piece of functionality).

To learn to estimate work time adequately, you need to practice constantly. At the beginning of my career I did exactly that: I estimated the time to complete every incoming task, even when nobody required it, and then checked how close I had come to my estimate. While working on a task I noted which actions took longer than expected. If something significantly stretched the schedule, I remembered that moment and took it into account in my next estimates.

To the objective estimate of the time needed purely for the work itself, a small margin should be added to cover force-majeure situations. It is often set as a percentage of the main task's duration, and it differs from person to person: some add 20% of the time, some 10%, and some 50%.

It is also useful to analyze the reasons after each serious deadline violation. If you lacked qualification, work on your weak points; if the problem was organizational, work out what prevented you from working normally.


, technical director of the center for innovative technologies and solutions "Jet Infosystems"

A large number of articles are devoted to methods for assessing the labor intensity of a project, including the duration of work and individual tasks. However, this still causes conflicts both within the project team and when communicating with the customer.

The main assistant in assessment is experience. Try to somehow compare the new task with the ones already done. If you're doing a report, look at how long a similar report took in the past. If you're doing something new, try breaking it down into known parts and evaluating them. If the task is completely new, set aside time to study (even better, coordinate this time with the person setting the task).

Pay attention to the accompanying stages: if you need to develop a service, the estimate must also include unit testing (and perhaps not only unit testing), and preparing test data will take some time. Consider integration with other services, and so on. Allow time for fixing the defects you find yourself or with the help of testers. A lot of time can be lost on "invisible" tasks: for example, there is an estimate for development and an estimate for testing, but handing the artifact over for testing may require deploying test environments. It is therefore important to picture the whole process mentally so as not to miss anything.

After determining the complexity, it is necessary to include new work in the calendar, not forgetting about other tasks and activities that go in parallel.

And don't forget that plans are useless, but planning is priceless. Learn to adjust plans in a timely manner, keep everyone involved informed, and escalate in a timely manner so that missed deadlines do not come as a surprise to anyone.


This question cannot be answered briefly. If it were simple, the problem of missed deadlines would not exist.

To make development deadlines more predictable, we must first understand why programmers constantly err in their estimates.

The first reason is that most of the tasks a programmer does are unique to one degree or another, so most likely he will be doing a task of this kind for the first time and has no good idea how long the work will take. If the programmer has solid experience and has performed a similar task before, his estimate will be closer to reality.

Let's use a simple analogy - if you have never dug ditches, you cannot say exactly how long it will take you to dig a trench 30 cm wide, 60 cm deep and 20 meters long. If you have dug before, your estimate of the work time will be much closer to the actual duration of the work.

The second reason is that programmers are optimists by nature. That is, when considering a task, selecting an implementation option for it, and evaluating improvements, the developer expects that everything will work as he expects. And he doesn’t think about the problems that he will encounter along the way. Often he cannot foresee them. For example, there is a task that a programmer can implement using a third-party open-source software library. At the evaluation stage, he found it on the Internet, read its description - it suits him. And he even correctly estimated the amount of work he would have to do to build in the use of this library. But he did not at all foresee that an error would arise in this library in the environment of his software product.

The developer will have to not only build the use of the library into his code, but also fix a bug in the library itself. And often the developer does not provide time for correcting his errors. Statistics show that testing and fixing errors can take up about 50% of the time spent on coding. The figure depends on the qualifications of the developer, the environment, and the development practices used (for example, unit tests significantly reduce this time and the final duration/labor intensity of the development task is less).

If we return to the analogy with the digger: the digger did not expect his shovel to break, forcing him to spend two hours looking for a new handle.

The third reason is unforeseen requirements. In no other area of material production, with which customers are so fond of comparing software development, is there such a flow of new requirements. Imagine the state of a digger who has dug 19 meters out of 20 and then hears from the customer that the ditch should not run in a straight line but snake along with a period of 97 centimeters.

How can one deal with all this and live with such uncertainty? By reducing the uncertainty and building in a time reserve.

The easiest way to bring your expectations closer to reality is to use the half-joking "rule of Pi": having received an estimate from the developer (in time or effort), multiply it by Pi (≈ 3.14159). The more experienced the developer who made the estimate, the lower this coefficient can be.

The practice of decomposing the original problem into small tasks of no more than 4 hours each is mandatory. The more detailed the decomposition, the higher the chances that the estimate will be close to the actual effort and duration.

Returning to the reserve: this time should be allocated at the end of the project. Building a reserve into every individual task is bad practice, for Parkinson's law, "work expands to fill the time allotted to it," operates without fail.

To sum it up briefly, in order to correctly determine the deadlines for completing the work, the following actions will be useful:

  • perform a work decomposition, breaking the task down into as detailed steps as possible;
  • carry out prototyping;
  • limit the implementation of previously unforeseen requirements. This does not mean that they do not need to be done, but it is advisable to highlight these requirements and agree with the customer on changes in the timing and cost for their implementation;
  • take into account the time required to stabilize the solution;
  • use practices to improve code quality, such as writing unit tests;
  • lay down a general reserve.

And remember: if the actual time exceeds your estimate by only 30%, that is a very good result.


For the most accurate assessment, you need experience in real development, and specifically in a specific area. But there are also general rules that will help you avoid mistakes in planning and problems when delivering the work to the customer. I would describe these rules like this.

First, you need to understand the task. This seems obvious and not directly related to estimating deadlines, but it is in fact the key point. Even in serious, large projects one of the main causes of failure and delay is trouble with defining requirements. For beginning developers this is unfortunately a serious problem: they do not read the specification, or read it and take it in very selectively (out of ten points they remembered and implemented five, and recalled the rest when handing over the result). Clearly, a misunderstood task cannot be implemented correctly and on time.

Next is to estimate the development time itself. The peculiarity of programming is that there are no absolutely identical tasks. This makes our work more interesting, but estimating deadlines is more difficult. Decomposition works well here, i.e. dividing a complex, unique problem into a sequence of small, familiar subtasks. And each of them can already be assessed in hours quite adequately. Let's add up the estimates of the subtasks and get an estimate for the entire task.

As a rule, such an estimate only includes the costs of coding itself. This is, of course, the most important part of the development, but far from the only one (and often not the most voluminous). Complete completion of the task also includes reading and clarifying the specification, meetings with colleagues or the customer, debugging and testing, drawing up documentation, delivery of the result (demonstration to the customer and possible modifications based on his comments). Only experience will tell you exactly how long it will take you to complete these actions. At first, it is important, at a minimum, not to forget to take them into account in the calculations, and you can ask more experienced colleagues for an approximate estimate of time.

So, we take an estimate of the labor costs for coding, add an estimate of the costs of additional work - and we get the required estimate of the time to complete the task. But that's not all! You need to indicate the planned completion date for the task. It would be a mistake to simply divide the labor costs (in hours) by 8 hours and add them to the current date. In real practice, a developer never (okay, almost never) works 100% of the time on one specific task. You will definitely spend time on other work - important, but not directly related to the main one. For example, helping colleagues, training, writing reports, etc. Typically, when planning, it is believed that 60-70% of the working time is spent directly working on the current project. Additionally, you need to take into account possible delays that will prevent you from continuously working on the task. For example, if for this you need to interact with other people (colleagues, customers), then take into account their availability, work schedule, etc.

Here are the basic rules that, in my opinion, will help the developer avoid problems in estimating and meeting deadlines. In addition, the key is to accumulate your own experience both in implementing tasks and in assessment. For example, it is very useful after completing a task to compare your initial estimate with the actual deadlines and draw conclusions for the future. And, of course, it is worth studying other people's experiences. I would recommend the books on the topic by S. McConnell “How much does a software project cost” and S. Arkhipenkov “Lectures on software project management.”


When estimating and planning deadlines, you must:

  1. Decompose the task into small functional pieces in such a way that there is a clear understanding of how long it will take to develop each such piece.
  2. In parallel with the decomposition, additional questions will certainly arise regarding functionality that was not described in the problem statement. It is necessary to obtain answers to such questions, since this directly relates to the scope of work and, therefore, timing.
  3. Add a certain percentage of risks to the final assessment. This is determined empirically. You can start, for example, with risks of 10–15%.
  4. Understand how many hours a day a programmer is willing to devote to completing a task.
  5. We divide the final estimate by the number of hours we allocate per day and get the number of days required for implementation.
  6. We focus on the calendar and the required number of days to complete. We take into account weekends and other days when the programmer will not be able to work on the task, as well as the start date of the work (the developer is not always ready to take the task on the same day). Thus we get the start and end dates of the work (a small sketch of steps 3-6 follows this list).
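
A small sketch of steps 3-6 under invented numbers (risk percentage, hours per day, start date):

    # Pad the estimate with a risk percentage, divide by the hours per day
    # available for the task, then walk the calendar skipping weekends.
    from datetime import date, timedelta

    def finish_date(start, estimate_h, risk=0.15, hours_per_day=5.0):
        days_left = estimate_h * (1 + risk) / hours_per_day  # steps 3 and 5
        day = start
        while True:
            if day.weekday() < 5:  # step 6: weekends do not count
                days_left -= 1
                if days_left <= 0:
                    return day
            day += timedelta(days=1)

    print(finish_date(date(2024, 3, 4), estimate_h=40))  # -> 2024-03-15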


In our company, task planning always goes through several stages. On the business side, we formulate 5-6 strategic goals for the year. These are high-level tasks, for example, increasing some parameter by so many percent. Next, various divisions of the company formulate business tasks for all IT teams. The deadlines for these tasks receive an initial rough estimate, which is often formed by all team members - manager, analyst, developer and tester. Once the business receives this assessment, it prioritizes tasks based on the company's strategic goals. Cross-cutting strategic goals help with this; with them, it becomes obvious that we are all working for some common cause; there is no such situation when someone is only pulling in their own direction. We collect sprints from tasks accurately estimated in terms of deadlines. For some teams they are quarterly, for others they are monthly. For several tasks that, according to preliminary estimates, will fall into the next sprint, the teams give an accurate estimate. Large tasks are divided into lower-level ones, for each of which a specific performer is responsible, and it is he who gives an accurate assessment.

At this stage, it is important not to forget to add a reserve of time to fix bugs, because only those who do nothing make no mistakes. Both Product Owners and business customers understand this very well. At the same time, the required amount of time must be adequate: no one will understand a developer who sets a deadline for a simple task that is too long; he will be asked to justify the decision. The most difficult thing is to explain to the business why it takes time to refactor. We are grateful to our company for the fact that from time to time we succeed in this, because ultimately, refactoring leads to simplification of the infrastructure and putting the code in order, which increases the stability of the system and can significantly speed up the development of new functions.

Sometimes errors in assessment still occur. In my opinion, it is impossible for the development department in large companies with developed infrastructure to completely avoid this. In this case, it is important that the developer promptly informs his manager about what is happening, and he, in turn, manages to warn the business and “replay” something in the company’s general plans. Working in this mode is much more correct than frantically trying to do in 3 days what takes 5, and then drowning in a large number of errors that arose due to such haste.


The correct answer to both parts of the question [how to learn to plan correctly and deliver a project on time - Ed.] is experience. There are no other ways to "grasp Zen". According to decision theory, accurate conclusions can be drawn only from the analysis of a body of already available data, and the more data there is, the more accurate the final forecast and estimate.

In the words of Herbert Shaw: “Experience is the school in which a man learns what a fool he was before.” This leads to a fairly simple conclusion: if a programmer already has experience that correlates with the task at hand, he can rely on it; if not, he can rely on the experience of his “colleagues.”

Next, you need to understand that direct estimation of deadlines is something people cope with very, very poorly, especially in development. When estimating due dates, it is considered good practice to apply an "adjustment factor" to the original estimate. This coefficient can range from 1.5 to 3, depending on the developer's experience and the combined degree of uncertainty of the tasks being solved within the project.


It is important to consider many factors when determining deadlines.

For example, work experience. How clearly do you understand the scope of the work ahead? Have you done anything like this before? It is clear that the more experience, the faster the work will be completed.

A well-written technical specification plays a significant role in determining deadlines. Things are very difficult with this in our area. Often the client himself does not know what he wants, so I advise you to spend an extra day or two, but get a clear idea from the client about the desired result. It is important that this understanding is mutual. And only after this can you begin to negotiate the amount and terms.

Also, always include risks. For beginners, I recommend multiplying the estimated completion time by two. After all, it is better to deliver a project ahead of schedule and grow as a specialist in the eyes of the customer, rather than to deliver it later and ruin your reputation.


A general recommendation: a developer needs to learn to decompose tasks correctly, always look for possible pitfalls, rely on his own experience, and not forget to warn customers and colleagues in good time if a task cannot be solved within the specified time frame.

Building a clear plan is much more difficult than determining the deadline for completing a single task. At the same time, it is important not only to deliver the project on time, but also to ensure that the system you develop correctly solves business problems. Here, IT teams are helped by various software development methodologies: from RUP and MSF to SCRUM and other Agile formats. The choice of tools is very extensive, and many of our customers want to understand in advance how we will work with them in the project, what principles we adhere to.

Incidentally, Agile is now winning over business, and in individual projects even the public sector, because the principles of this methodology make it possible to deliver projects very quickly while managing the customer's expectations at each iteration. For example, an Agile team has practically no protracted negotiations with the customer. Forget dozens of pages describing unnecessary technical details, such as how quickly a drop-down list appears: give the customer the opportunity to try an intermediate version of the system, and it will become much easier for you to understand each other.

The Agile team plans everything together and determines the optimal amount of labor needed to solve a particular problem. One of the techniques, for example, is called planning poker: each participant anonymously gives an estimate of the labor required for a specific task, after which the team settles on the average weight of the task in story points or man-hours and distributes tasks on the principle of "who likes what". Every day the team gathers for a 15-minute stand-up at which everyone spends a couple of minutes on the status of their current tasks, including any difficulties that have arisen. A detected problem is fixed quickly, so the customer sees the next increment of the work as soon as possible. Developers do not drag tasks out through reluctance to bother the team yet again, or through futile attempts to figure things out alone, killing precious time. Incidentally, at such mini-status meetings developers want to show their best side, to show that they approach their work responsibly. It really motivates and instills self-discipline.

…minimizing response time (the time from when work becomes ready until it is completed, in the case of batch activity, or until the system responds and hands the first output to the user, in the case of interactive activity); or maximizing fairness (an equal amount of CPU time for each process, or, more generally, CPU time corresponding to each process's priority and workload). In practice these goals often conflict (e.g. throughput versus latency), so the scheduler implements a suitable compromise; which of the concerns above is preferred depends on the user's needs and objectives.

OS/360 and successors

AIX

In AIX Version 4, there are three possible settings for the thread scheduling policy:

  • First-in, first-out: once a thread with this policy is scheduled, it runs to completion unless it blocks, voluntarily yields control of the processor, or a higher-priority thread becomes dispatchable. Only fixed-priority threads can have a FIFO scheduling policy.
  • Round-robin: this is similar to the AIX Version 3 round-robin scheme based on 10 ms time slices. When an RR thread still has control at the end of its time slice, it moves to the tail of the queue of threads of the same priority. Only fixed-priority threads can have a round-robin scheduling policy.
  • OTHER: POSIX 1003.4a leaves this policy implementation-defined. In AIX Version 4 it is equivalent to round-robin, except that it applies to threads with non-fixed priority: the priority of a running thread is recalculated on each clock interrupt, so a thread can lose the CPU because its priority value has risen above that of another dispatchable thread. This is the AIX Version 3 behavior.

Threads are primarily of interest for applications that currently consist of several asynchronous processes. Such applications may impose a lighter load on the system if converted to a multithreaded structure.

AIX 5 implements the following scheduling policies: FIFO, round-robin, and fair round-robin. The FIFO policy has three different implementations: FIFO, FIFO2, and FIFO3. The round-robin policy is called SCHED_RR in AIX, and the fair round-robin policy is called SCHED_OTHER.

Linux

Linux 2.4

Brain Fuck Scheduler (BFS), also created by Kolivas, is an alternative to CFS.

FreeBSD

FreeBSD uses a multi-level feedback queue with priorities in the range 0-255. 0-63 are reserved for interrupts, 64-127 for the upper half of the kernel, 128-159 for real-time user threads, 160-223 for time-sharing user threads, and 224-255 for idle user threads. Also, like Linux, it uses an active queue setup, but it also has an idle queue.

Introduction

The purpose of this workshop on production organization is to broaden and deepen theoretical knowledge and to instill the skills needed to solve the problems most frequently encountered in practice when organizing and planning production.

The workshop includes problems on the main sections of the course. Each topic opens with brief methodological instructions and theoretical background, followed by typical problems with solutions and problems for independent work.

The methodological instructions and brief theory provided in each topic make it possible to use this workshop for distance learning.


Calculation of production cycle duration

The duration of the production cycle serves as an indicator of the efficiency of the production process.

The production cycle is the period during which objects of labor remain in the production process, from the moment raw materials are launched until the moment finished products are released.

The production cycle consists of working time, during which labor is expended, and break time. Breaks, depending on their causes, can be divided into:

1) natural, or technological, breaks, determined by the nature of the product;

2) organizational breaks (between shifts).

The duration of the production cycle consists of the following components:

T_cycle = t_tech + t_nat + t_tr + t_qc + t_i.o. + t_i.s.,

where t_tech is the time of technological operations;

t_nat is the time of natural processes (drying, cooling, etc.);

t_tr is the time spent transporting objects of labor;

t_qc is the quality control time;

t_i.o. is the interoperation waiting time;

t_i.s. is the storage time in inter-shop warehouses

(t_tr and t_qc can be overlapped with t_i.o.).

The calculation of the production cycle time depends on the type of production. In mass production, the duration of the production cycle is determined by the time the product is in production, i.e.

T_cycle = t_r · M,

where t_r is the release takt;

M is the number of workplaces.

The release takt is the time interval between the release of one manufactured product and the release of the next.

The release takt is determined by the formula

t_r = T_eff / V,

where T_eff is the effective fund of working time for the planning period (shift, day, year);

V is the output volume for the same period (in natural units).

Example: T_shift = 8 h = 480 min; T_breaks = 30 min; → T_eff = 480 − 30 = 450 min.

V = 225 pcs; → t_r = 450/225 = 2 min.
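
The same calculation as a small function, with the numbers from the example:

    # Takt-time sketch: effective shift time divided by output volume.
    def takt(shift_min, breaks_min, volume):
        t_eff = shift_min - breaks_min  # effective fund of working time
        return t_eff / volume           # minutes between successive units

    print(takt(480, 30, 225))  # -> 2.0 min, as in the example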

In serial production, where parts are processed in batches, the duration of the technological cycle is determined not for a unit of product but for the whole batch. Moreover, depending on how the batch is launched into production, we get different cycle times. There are three ways of moving products through production: sequential, parallel, and mixed (series-parallel).


I. With sequential movement, each subsequent operation begins only after the previous operation has finished for the whole batch. The cycle duration for sequential movement of parts is

T_seq = n · Σ (t_pcs,i / C_i), summed over i = 1, ..., m,

where n is the number of parts in the batch being processed;

t_pcs,i is the piece time of operation i;

C_i is the number of workplaces at operation i;

m is the number of operations in the technological process.

A batch of products consisting of 5 pieces is given. The batch passes sequentially through 4 operations; the duration of the first operation is 10 min, of the second 20 min, of the third 10 min and of the fourth 30 min (Fig. 1).

Figure 1

T_cycle = T_seq = 5·(10 + 20 + 10 + 30) = 350 min.

The sequential method of moving parts has the advantage of keeping the equipment running without idle time, but its drawback is that the production cycle is at its longest. In addition, significant stocks of parts accumulate at the workplaces, which requires additional production space.

II. With parallel movement, individual parts are not held up at the workstations but are transferred to the next operation one at a time, without waiting for the processing of the whole batch to finish. Thus, with parallel movement of a batch, different operations are performed simultaneously on different parts of the same batch at the various workplaces.

The processing time of a batch with parallel movement is sharply reduced:

T_par = n_tr · Σ (t_pcs,i / C_i) + (n − n_tr) · (t_pcs / C)_max,

where n_tr is the number of parts in the transfer (transport) batch, i.e. the number of products passed from one operation to the next at a time;

(t_pcs / C)_max is the longest operation cycle.

When a batch is launched in parallel, the parts of the whole batch are processed continuously only at those workplaces where long operations follow short ones. Where short operations follow long ones (in our example, the third operation), they are performed with interruptions, i.e. the equipment stands idle: the batch cannot be processed there without delays, because the previous (long) operation does not allow it.

In our example: n = 5, t1 = 10; t2 = 20; t3 = 10; t4 = 30; C = 1.

T_par = 1·(10 + 20 + 10 + 30) + (5 − 1)·30 = 70 + 120 = 190 min.

Let's consider the diagram of parallel movement of parts (Fig. 2):

Figure 2

III. To eliminate interruptions in the processing of individual parts of a batch at all operations, a parallel-sequential, or mixed, launch method is used, in which parts (after processing) are passed to the next operation individually or in "transport" batches of several pieces, in such a way that work is not interrupted at any workplace. The mixed method takes the continuity of processing from the sequential method, and the transfer of each part to the next operation immediately after its processing from the parallel method. With the mixed launch method the cycle duration is determined by the formula

T_mix = n · Σ (t_pcs,i / C_i) − (n − n_tr) · Σ t_cor,

where t_cor is the shorter operation cycle of each pair of adjacent operations, there being m − 1 such pairs.

If the subsequent operation is longer than the previous one, or equal to it in time, parts are started on it individually, immediately after the first part has been processed at the previous operation. If, on the contrary, the subsequent operation is shorter than the previous one, piece-by-piece transfer would cause interruptions; to prevent them it is necessary to accumulate a transport backlog large enough to keep the subsequent operation busy. To find this point on the graph in practice, plot the last part of the batch and lay off the duration of its processing to the right; the processing time of all the other parts of the batch is laid off to the left. The start of processing of the first part then shows the moment when the transport backlog from the previous operation must be handed over to this operation.

If adjacent operations are the same in duration, then only one of them is considered short or long (Fig. 3).

Figure 3

T_mix = 5·(10 + 20 + 10 + 30) − (5 − 1)·(10 + 10 + 10) = 350 − 120 = 230 min.
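
All three formulas can be checked against the worked examples with a short sketch, assuming one workplace per operation and a transfer batch of one part, as in the figures:

    # The three launch methods for a batch of n parts with per-piece
    # operation times t[i].
    def t_seq(n, t):    # sequential
        return n * sum(t)

    def t_par(n, t):    # parallel
        return sum(t) + (n - 1) * max(t)

    def t_mixed(n, t):  # parallel-sequential (mixed)
        shorts = sum(min(a, b) for a, b in zip(t, t[1:]))  # m - 1 adjacent pairs
        return n * sum(t) - (n - 1) * shorts

    t = [10, 20, 10, 30]  # the example batch of 5 parts
    print(t_seq(5, t), t_par(5, t), t_mixed(5, t))  # -> 350 190 230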

The main ways to reduce the production cycle time are:

1) Reducing the labor intensity of manufacturing products by improving the manufacturability of the manufactured design, using computers, and introducing advanced technological processes.

2) Rational organization of labor processes, arrangement and maintenance of workplaces based on specialization and cooperation, extensive mechanization and automation of production.

3) Reduction of various planned and unplanned breaks at work based on the rational use of the principles of scientific organization of the production process.

4) Acceleration of reactions as a result of increasing pressure, temperatures, transition to a continuous process, etc.

5) Improving the processes of transportation, storage and control and combining them in time with the processing and assembly process.

Reducing the duration of the production cycle is one of the serious tasks of production organization, since it affects the turnover of working capital, labor costs, the need for storage space and transport, etc.

Tasks

1 Determine the duration of the processing cycle of 50 parts with sequential, parallel and serial-parallel types of movement in the production process. The process of processing parts consists of five operations, the duration of which is, respectively, min: t 1 =2; t 2 =3; t 3 =4; t 4 =1; t 5 =3. The second operation is performed on two machines, and each of the others on one. The size of the transfer lot is 4 pieces.

2 Determine the duration of the processing cycle of 50 parts with sequential, parallel and serial-parallel types of movement in the production process. The process of processing parts consists of four operations, the duration of which is, respectively, min: t 1 =1; t 2 =4; t 3 =2; t 4 =6. The fourth operation is performed on two machines, and each of the others on one. The size of the transfer lot is 5 pieces.

3 A batch of parts of 200 pieces is processed with parallel-sequential movement during the production process. The process of processing parts consists of six operations, the duration of which is, respectively, min: t 1 =8; t 2 =3; t 3 =27; t 4 =6; t 5 =4; t 6 =20. The third operation is performed on three machines, the sixth on two, and each of the remaining operations on one machine. Determine how the duration of the processing cycle for a batch of parts will change if the parallel-sequential version of the movement in production is replaced by a parallel one. The size of the transfer lot is 20 pieces.

4 A batch of parts of 300 pieces is processed with parallel-sequential movement during the production process. The process of processing parts consists of seven operations, the duration of which is, respectively, min: t 1 =4; t 2 =5; t 3 =7; t 4 =3; t 5 =4; t 6 =5; t 7 =6. Each operation is performed on one machine. Transfer lot – 30 pieces. As a result of improving production technology, the duration of the third operation was reduced by 3 minutes, the seventh - by 2 minutes. Determine how the processing cycle of a batch of parts changes.

5 A batch of blanks consisting of 5 pieces is given. The batch goes through 4 operations: the duration of the first is 10 minutes, the second is 20 minutes, the third is 10 minutes, the fourth is 30 minutes. Determine the cycle duration by analytical and graphical methods with sequential movement.

6 A batch of blanks consisting of four pieces is given. The batch goes through 4 operations: the duration of the first is 5 minutes, the second is 10 minutes, the third is 5 minutes, the fourth is 15 minutes. Determine the cycle duration by analytical and graphical methods with parallel movement.

7 A batch of blanks consisting of 5 pieces is given. The batch goes through 4 operations: the duration of the first is 10 minutes, the second is 20 minutes, the third is 10 minutes, the fourth is 30 minutes. Determine the cycle duration by analytical and graphical methods for serial-parallel motion.

8 Determine the duration of the technological cycle for processing a batch of products of 180 pieces. with parallel and sequential variants of its movement. Build processing process graphs. The size of the transfer lot is 30 pcs. Time standards and number of jobs in operations are as follows.

Everything described in the several preceding sections was oriented more toward further research into the problem of a process's proper time and, to a much lesser extent, toward practical applications. To fill this gap, we outline one way of computing the proper time of a process from statistical data on its evolution.

Let us consider a one-dimensional process whose state is characterized by a real variable x. Assume that observations of the dynamics of the process are made in astronomical time t, so that t = t_k and x = x_k, k = 1, ..., n, are the fixed observation moments and the corresponding states of the process. There are many mathematical methods for constructing curves that either pass through the points (t_k, x_k) or approach them "best". The functions x = x(t) obtained in this way create the impression that the process under consideration depends on the mechanical motion of celestial bodies, and therefore that its state is expressed through astronomical time t. This conclusion could be accepted, were it not for the persistent difficulties that arise when one tries to predict the further course of the process. For a large number of processes that are not directly tied to the mechanical motions of celestial bodies, the theoretical predictions obtained with the function x = x(t) outside the observation interval begin to deviate significantly from subsequent experimental data. The discrepancy between theory and experiment is usually blamed on a poorly chosen processing method, but the essence of the matter may lie elsewhere.

Any process that interests us takes place in the Universe, so it certainly "feels" the influence of the motion of celestial bodies. This influence, however, may turn out to be loose, non-determining; in particular, it may show up in the fact that over certain intervals of astronomical time the state of the process remains unchanged. In this connection, recall the earlier example of a closed, empty room isolated from the outside world, into which a single live fly is admitted. Over several days, changes in the state of the "room plus fly" system will depend on the movements of the fly, since no changes in the state of the room itself are to be expected. At the same time, it is hard to imagine that the behavior of the fly is rigidly tied to the course of astronomical time.

After this long digression, let us move on to describing the algorithm for computing a process's proper time.

In this algorithm, the count of local maxima is taken as the natural measure of time. In addition, possible stretches where the process is stationary are taken into account; on them, as noted earlier, proper time stops. Since two states can be called identical only to within the measurement accuracy, a certain positive number ε, the permissible measurement error, is used in what follows.

So, the input data for the algorithm are the natural number n, the positive number ε, and the arrays (t_k) and (x_k), k = 1, ..., n. For ease of programming, the algorithm is presented as four sequentially executed modules.

Module 1, using the data n, ε, (t_k), (x_k), forms, in the general case, new arrays T = (T_i), X = (X_i) and an accompanying array P = (p_i), where i = 1, ..., m and m ≤ n. The main purpose of this module is to find in the array (x_k) runs of identical process states, keeping the first element of each run and deleting the rest, and, finally, to shorten the original observation interval, by a definite rule, by the sum of those time intervals during which the process is stationary.

Module 1 includes the following procedures:

1-2. v := 1, k := 1, i.e. counters with specific initial values are introduced.

3-4. The counter values are increased by 1: k1 := k + 1.

5. Check the condition k1 ≤ n. If it holds, go to step 6; otherwise go to step 11.

6. Check the inequality |x_k1 − x_k| ≤ ε. If it holds, go to step 7; otherwise go to step 9.

7. t_i := t_i − (t_k1 − t_k), i = k1, ..., n.

This procedure means that if the values x_k and x_k1 are indistinguishable within the error, all observation moments starting from t_k1 are reduced by the amount t_k1 − t_k.

8. Return to step 4.

9. T_v := t_k; X_v := x_k; the element p_v is fixed and v := v + 1, i.e. the next elements of the arrays T, X, P are formed.

10. Take (t_k1, ..., t_n) and (x_k1, ..., x_n) as the initial arrays of dimension n − k1 + 1 and return to step 2.

11. Print m, (T_i), (X_i) and (P_i), where i = 1, ..., m. End.

Let us explain the meaning of the elements of the accompanying array P. From the foregoing it follows that p_i is the number of consecutive elements of the array (x_k), beginning with the one kept as X_i, that differ from it by less than ε. Note also that p_1 + ... + p_m = n.

Example 1. Given: n = 20, (t_k) = (2, 4, 7, 10, 12, 13, 15, 17, 20, 22, 24, 25, 27, 30, 32, 33, 34, 35, 36) and (x_k) = (4, 4, 6, 6, 6, 3, 2, 4, 3, 3, 3, 2, 2, 4, 5, 5, 5, 4, 3), see Fig. 9, a.

As a result of executing module 1 we obtain m = 11,

(T_i) = (2, 3, 4, 6, 8, 11, 12, 15, 17, 18, 19); (X_i) = (4, 6, 3, 2, 4, 3, 2, 4, 5, 4, 3)

and (p_i) = (2, 4, 1, 1, 1, 3, 2, 1, 3, 1, 1), see Fig. 9, b.

Module 2. Its input data are the natural number m and the arrays (T_i), (X_i), i = 1, ..., m. This module finds in the array (T_i) the moments (TM_l), l = 1, ..., m1, at which the sequence (X_i) has local maxima, as well as the subarray (T*_j), j = 1, ..., m2, of observation moments lying between the first and the last local maximum.

Example 2. The values of m, (T_i) and (X_i) are borrowed from the previous example. After executing module 2 we obtain m1 = 3, m2 = 8, (TM_l) = (3, 8, 17), (T*_j) = (3, 4, 6, 8, 11, 12, 15, 17), see also Fig. 9, b.

Module 3. Input data: m1, m2, (TM_l), l = 1, ..., m1, and (T*_j), j = 1, ..., m2.

This module is designed to construct the array (τ_j) by the formula

τ_j = (l − 1) + (T*_j − TM_l) / (TM_{l+1} − TM_l), where T*_j ∈ [TM_l, TM_{l+1}].

The variable τ is the proper time generated by the change of the variable x. Its natural measure is the count of local maxima.

Example 3. The initial data for module 3 are the values of m1, m2, (TM_l) and (T*_j) from example 2. After the corresponding calculations we obtain (τ_j) = (0; 0.2; 0.6; 1; 1.33; 1.44; 1.78; 2).

Module 4. Produces the output of the results by establishing the correspondence between the values of τ and the elements x from the array (x_k).

Example 4. Based on the data of examples 2 and 3, the following result is produced, see Fig. 9, c:

τ: 0; 0.2; 0.6; 1; 1.33; 1.44; 1.78; 2;

x: 6; 3; 2; 4; 3; 2; 4; 5.
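
A compact sketch of modules 2 and 3 as reconstructed above (module 1's compression of stationary stretches is assumed to have been done); it reproduces the numbers of examples 2-4:

    # Find the local maxima of X, then map each observation moment between
    # the first and last maximum to proper time, one unit per inter-maximum
    # interval.
    def proper_time(T, X):
        peaks = [i for i in range(1, len(X) - 1)
                 if X[i - 1] < X[i] > X[i + 1]]     # module 2: local maxima
        TM = [T[i] for i in peaks]                  # their observation moments
        Tstar = [t for t in T if TM[0] <= t <= TM[-1]]
        tau = []                                    # module 3
        for t in Tstar:
            l = max(i for i, tm in enumerate(TM) if tm <= t)
            if l == len(TM) - 1:
                tau.append(float(l))
            else:
                tau.append(l + (t - TM[l]) / (TM[l + 1] - TM[l]))
        return Tstar, tau

    T = [2, 3, 4, 6, 8, 11, 12, 15, 17, 18, 19]  # (T_i) from example 1
    X = [4, 6, 3, 2, 4, 3, 2, 4, 5, 4, 3]        # (X_i) from example 1
    print(proper_time(T, X)[1])  # approx. [0, 0.2, 0.6, 1, 1.33, 1.44, 1.78, 2]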

Thus, the algorithm considered allows one to develop the notion of a process's proper time from information about changes in the state of the process recorded on the astronomical time scale. Clearly, other algorithms can be used as well, based, for example, on counting a sequence of local minima, or a mixed sequence of local maxima and minima. When processing experimental data, the various options should probably all be tried. If for some reason the experimenter has settled on one specific proper time and obtained the arrays (τ_k) and (x_k), then at the next stage he should use some mathematical method to draw through the experimental points (τ_k, x_k) an approximate world line of the process x = x(τ). By extrapolating this line beyond the observation interval, he can make predictions about the further course of the process.

It is worth mentioning a computational experiment intended to assess the prospects of the proposed algorithm. The experimental material was data on the annual flow of the Vakhsh River (Tajikistan) over the preceding 40 years. For the same period, data were taken on the dynamics of the Wolf number, the most commonly used integral index of solar activity; the latter served to construct the proper time of the solar-activity process. The flow data for the Vakhsh were converted to this proper time, and a theoretical dependence of water flow as a function of the proper time of solar activity was fitted over the observation period. A characteristic feature of the resulting graph is the almost periodic behavior of the maximum and minimum flows; the flows themselves, however, do not remain constant.