Operating systems are a key component of most computer systems, responsible for managing the hardware and software. Available operating systems, both commercial and open-source, vary greatly in their capabilities and applications. At the small end of the scale are embedded operating systems, often performing highly specialised tasks on application-specific hardware (e.g. the software controlling a fuel-injected car engine, mobile phone or aeroplane guidance system). Commodity hardware platforms (such as the IBM PC), of which millions exist, require more complex general-purpose operating systems (Microsoft Windows and Linux are two familiar instances). Towards the large end of the scale are operating systems that manage massively parallel computing platforms, possibly distributed over networks. In whatever environment an operating system is used, it must function correctly and handle errors gracefully.

Current operating systems suffer (to varying degrees) from three major problems:

- Incorrect implementation: the operating system contains erroneous code, resulting in undesirable behaviour (with effects ranging from time-wasting to catastrophic).

- Lack of scalability: the operating system fails to scale beyond a single machine or a small number of processors, limiting the upgradability of the hardware.

- Lack of performance: the design methods and tools commonly used to develop operating systems introduce performance-damaging overheads -- the operating system must ensure that badly behaved programs (including components of the operating system itself) do not inadvertently affect other parts of the system.

The proposed research addresses these problems through the design and development of concurrent operating-system components that can simply be plugged together to produce operating systems with the desired capabilities, initially targeting a range of standardised embedded hardware (PC/104).
To guarantee that connecting such components will work as expected requires a high degree of formalism -- in particular, specification of their concurrent interactions.

A crucial aspect of this research concerns the dynamics of such networks -- allowing components and their supporting connections to be generated and moved around while the system still runs. Such capability is helpful even for isolated uniprocessor platforms, but is especially relevant for future multiprocessor chips and the likely total (wireless) interconnection of pervasive embedded systems.

The formalism comes from two process algebras -- Hoare's CSP and Milner's pi-calculus -- which can describe the behaviour of the proposed concurrent components. Crucially, it can reveal the precise behaviour of combined components, allowing bad combinations to be rejected at the design stage. By using CSP- and pi-calculus-aware design and programming tools, guarantees can also be made about the integrity of purely sequential code, particularly in light of the surrounding concurrency.

There is an increasing need for software technologies that allow concurrency to be exploited efficiently. Single-processor systems are gradually reaching their silicon limits and the major manufacturers are already looking towards hardware parallelism.

A new approach to software design is needed, as failure and sustainability become increasingly problematic. Systems are becoming complex to a degree where they are frequently delivered late (or not at all), over budget and, in many cases, contain unknown failure conditions and behaviours. Modifying existing systems in the face of changing requirements is unworkable in many cases, resulting in the development of new systems from scratch, at substantial cost and inconvenience. The formalised concurrent approach offers scalability at a cost proportional to the size of the change, not the size of the system.