The worldwide animal feed industry consumed 635 million tons of feed (compound feed equivalent) in 2006, with an annual growth rate of about 2%. The use of agricultural land to grow feed rather than human food can be controversial; some types of feed, such as corn (maize), can also serve as human food, while others such as grass cannot. Some agricultural by-products which are fed to animals may be considered unsavory by human consumers.
The resulting CO2 emissions are equal to the amount of CO2 that the sugarcane plant absorbed from the atmosphere during its growing phase, which makes the process of cogeneration appear to be greenhouse gas-neutral. However, when a full audit of the energy used in production is done, 75% of the energy required to grow and transport the sugarcane (including bagasse) comes from liquid fuel (petroleum or other hydrocarbons), leaving a net gain of only 25% from photosynthesis. Ethanol produced from the sugar in sugarcane is a popular fuel in Brazil, and the cellulose-rich bagasse is now being tested for production of commercial quantities of cellulosic ethanol. Verenium Corporation (VRNM) is building a cellulosic ethanol plant in Jennings, Louisiana, based on cellulosic by-products such as bagasse. It is using a biotech approach intended to improve ethanol yields beyond those of the Midwest corn-based method, allowing regional cellulosic ethanol production that avoids the problem of transporting ethanol and brings ethanol and E85 fuel to the important markets in California and the Northeast.
A base load power plant (or base load power station) is one that is best suited to serving the base load because it takes a long time to start up and is relatively inefficient at less than full output. These plants run at all times throughout the year except during repairs or scheduled maintenance.
Each base load power plant on a grid is allotted a specific portion of the base load demand to handle. The base load power is determined by the load duration curve of the system. For a typical power system, the rule of thumb is that the base load is usually 35-40% of the maximum load during the year.
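The load duration curve and the 35-40% rule of thumb lend themselves to a quick calculation. The sketch below uses hypothetical hourly loads; the function names are illustrative, not from any standard library:

```python
# Sketch of estimating base load from a load duration curve, using the
# 35-40% rule of thumb described above. All load figures are hypothetical.
def load_duration_curve(hourly_loads):
    """Sort loads in descending order: the classic load duration curve."""
    return sorted(hourly_loads, reverse=True)

def base_load_estimate(hourly_loads, fraction=0.35):
    """Rule of thumb: base load is roughly 35-40% of the annual maximum load."""
    return fraction * max(hourly_loads)

# Hypothetical hourly demand, in MW, for a small system
loads = [620, 580, 900, 1000, 840, 700, 650, 610]
curve = load_duration_curve(loads)
print(curve[0])                   # peak load: 1000
print(base_load_estimate(loads))  # 350.0 (at the 35% end of the range)
```

In practice the base load is read from a full year of the load duration curve rather than a handful of samples, but the arithmetic is the same.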
Peaks or spikes in customer power demand are handled by smaller and more responsive types of power plants.
Nuclear and coal power plants may take many hours, if not days, to achieve a steady state power output. On the other hand, they have low fuel costs. Since they require a long period of time to heat up to operating temperature, these plants typically handle large amounts of baseload demand. Different plants and technologies may have differing capacities to increase or decrease output on demand: nuclear plants are generally run at close to peak output continuously (apart from maintenance, refueling and periodic refurbishment), while coal-fired plants may be cycled over the course of a day to meet demand. Plants with multiple generating units may be used as a group to improve the "fit" with demand, by operating each unit as close to peak efficiency as possible.
In computer programming, BASIC is a family of high-level programming languages. The original BASIC was designed in 1964 by John George Kemeny and Thomas Eugene Kurtz at Dartmouth College, Hanover, New Hampshire, U.S., to provide computer access to non-science students. At the time, nearly all use of computers required writing custom software, which was something only scientists and mathematicians tended to do. The language (in one variant or another) became widespread on microcomputers in the late 1970s and home computers in the 1980s. BASIC remains popular to this day in a handful of highly modified dialects and in new languages based on BASIC, such as Microsoft Visual Basic.
Blends of biodiesel and conventional hydrocarbon-based diesel are products most commonly distributed for use in the retail diesel fuel marketplace. Much of the world uses a system known as the "B" factor to state the amount of biodiesel in any fuel mix: fuel containing 20% biodiesel is labeled B20, while pure biodiesel is referred to as B100. It is common to see B99, since 1% petrodiesel is sufficiently toxic to retard mold. Blends of 20 percent biodiesel with 80 percent petroleum diesel (B20) can generally be used in unmodified diesel engines.
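The "B" factor described above is simple volume arithmetic. A minimal sketch, with illustrative function names:

```python
# Sketch of the "B" factor labeling scheme: the number after "B" is the
# percentage of biodiesel by volume in the blend.
def blend_label(biodiesel_percent):
    """Return the 'B' label for a given biodiesel volume percentage."""
    if not 0 <= biodiesel_percent <= 100:
        raise ValueError("percentage must be between 0 and 100")
    return f"B{biodiesel_percent:g}"

def biodiesel_volume(total_liters, label):
    """Liters of biodiesel in a blend, e.g. 50 L of B20 contains 10 L."""
    percent = float(label.lstrip("B"))
    return total_liters * percent / 100

print(blend_label(20))              # B20
print(blend_label(100))             # B100
print(biodiesel_volume(50, "B20"))  # 10.0
```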
Biodiesel has different solvent properties than petrodiesel, and will degrade natural rubber gaskets and hoses in vehicles (mostly found in vehicles manufactured before 1992), although these tend to wear out naturally and most likely will have already been replaced with FKM, which is nonreactive to biodiesel. Biodiesel has been known to break down deposits of residue in the fuel lines where petrodiesel has been used. As a result, fuel filters may become clogged with particulates if a quick transition to pure biodiesel is made. Therefore, it is recommended to change the fuel filters on engines and heaters shortly after first switching to a biodiesel blend.
Biodiesel use and production are increasing rapidly. Fueling stations make biodiesel readily available to consumers across Europe, and increasingly in the USA and Canada, and a growing number of transport fleets use it as an additive in their fuel. Biodiesel is often more expensive to purchase than petroleum diesel, but this gap is expected to diminish due to economies of scale and agricultural subsidies, set against the rising cost of petroleum as reserves are depleted.
Biodiesel has better lubricating properties than today's diesel fuels, whose manufacture includes severe hydrotreatment to comply with the low-sulfur limits set in modern emission standards. Adding biodiesel reduces wear, increasing the life of fuel injection equipment that relies on the fuel for its lubrication, such as high-pressure injection pumps, pump injectors (also called unit injectors) and fuel injectors.
One of the main drivers for adoption of biodiesel is energy security: a nation's dependence on oil is reduced and substituted with locally available sources, such as coal, gas, or renewable sources. Thus significant benefits can accrue to a country from adoption of biofuels, even without a reduction in greenhouse gas emissions. Whilst the total energy balance is debated, it is clear that the dependence on oil is reduced; one example is the energy used to manufacture fertilizers, which could come from a variety of sources other than petroleum. The US NREL says that energy security is the number one driving force behind the US biofuels program, and the White House "Energy Security for the 21st Century" makes clear that energy security is a major reason for promoting biodiesel. The EU Commission president, José Manuel Barroso, speaking at an EU biofuels conference, stressed that properly managed biofuels have the potential to reinforce the EU's security of supply through diversification of energy sources.
The methane, hydrogen and carbon monoxide in biogas can be combusted or oxidized with oxygen (air contains about 21% oxygen). The energy released allows biogas to be used as a fuel: it can serve as a low-cost fuel in any country for any heating purpose, such as cooking. It can also be used in modern waste management facilities to run any type of heat engine, generating either mechanical or electrical power. Biogas is a renewable fuel, and electricity produced from it can attract renewable energy subsidies in some parts of the world.
Biogas can be utilized for electricity production, cooking, space heating, water heating and process heating. If compressed, it can replace compressed natural gas for use in vehicles, where it can fuel an internal combustion engine or fuel cells.
Methane within biogas can be concentrated to the same standards as natural gas; the result is called biomethane. If the local gas network permits, the producer of the biogas may be able to feed it into the local gas distribution network. Gas must be very clean to reach pipeline quality, and must be of the correct composition for the local distribution network to accept: carbon dioxide, water, hydrogen sulfide and particulates must be removed if present. If concentrated and compressed, it can also be used in vehicle transportation; compressed biogas is becoming widely used in Sweden, Switzerland and Germany, and a biogas-powered train has been in service in Sweden since 2005.
Bates and his biogas car were the subject of a short documentary film called 'Sweet as a Nut' in 1974, by which point he had run his car for 17 years on gas he had produced by processing pig manure. Bates, an inventor, lived in Devon, UK, and in the film talks through the simple process and benefits of running a car on biogas. The conversion was made simply with an adapter that could be attached to any combustion engine.
BSL-2 differs from BSL-1 in that: laboratory personnel have specific training in handling pathogenic agents and are directed by scientists with advanced training; access to the laboratory is limited when work is being conducted; extreme precautions are taken with contaminated sharp items; and certain procedures in which infectious aerosols or splashes may be created are conducted in biological safety cabinets or other physical containment equipment.
The business goals being pursued may be for-profit or non-profit. For-profit business plans typically focus on financial goals, while non-profit and government agency business plans tend to focus on service goals, although non-profits may also seek to maximize income. Business plans may also target changes in perception and branding by the customer, client, tax-payer, or larger community. A business plan that has changes in perception and branding as its primary goals is called a marketing plan.
Business plans may be internally or externally focused. Externally focused plans target goals that are important to external stakeholders, particularly financial stakeholders. They typically have detailed information about the organization or team attempting to reach the goals. With for-profit entities, external stakeholders include investors and customers. External stakeholders of non-profits include donors and the clients of the non-profit's services. For government agencies, external stakeholders include tax-payers, higher-level government agencies, and international lending bodies such as the IMF, the World Bank, various economic agencies of the UN, and development banks.
Internally focused business plans target intermediate goals required to reach the external goals. They may cover the development of a new product, a new service, a new IT system, a restructuring of finance, the refurbishing of a factory or a restructuring of the organization. An internal business plan is often developed in conjunction with a balanced scorecard or a list of critical success factors. This allows success of the plan to be measured using non-financial measures. Business plans that identify and target internal goals, but provide only general guidance on how they will be met are called strategic plans.
Operational plans describe the goals of an internal organization, working group or department. Project plans, sometimes known as project frameworks, describe the goals of a particular project. They may also address the project's place within the organization's larger strategic goals.
A fossil fuel power plant's capital costs include the purchase of the land the plant is built on, permitting and legal costs, the equipment needed to run the plant, the cost of the plant's construction, the cost of financing and the cost of commissioning the plant incurred prior to commercial operation of the plant. They do not include the cost of the natural gas, fuel oil or coal used to fire the plant once the plant enters commercial operation or any taxes on the electricity that is produced. They also do not include the labor used to run the plant or the labor and supplies needed for maintenance.
Concentrations of carbon dioxide fall during the northern spring and summer as plants consume the gas, and rise during the northern autumn and winter as plants go dormant, die and decay. Carbon dioxide is a greenhouse gas as it transmits visible light but absorbs strongly in the infrared and near-infrared.
Carbon dioxide is used by plants during photosynthesis to make sugars which may either be consumed again in respiration or used as the raw material to produce polysaccharides such as starch and cellulose, proteins and the wide variety of other organic compounds required for plant growth and development. It is produced during respiration by plants, and by all animals, fungi and microorganisms that depend on living and decaying plants for food, either directly or indirectly. It is, therefore, a major component of the carbon cycle. Carbon dioxide is generated as a by-product of the combustion of fossil fuels or the burning of vegetable matter, among other chemical processes. Large amounts of carbon dioxide are emitted from volcanoes and other geothermal processes such as hot springs and geysers and by the dissolution of carbonates in crustal rocks.
Carbon dioxide has no liquid state at pressures below 5.1 atm, but is a solid at temperatures below -78 °C. In its solid state, carbon dioxide is commonly called dry ice. CO2 is an acidic oxide: an aqueous solution turns litmus from blue to pink. CO2 is toxic in higher concentrations: 1% (10,000 ppm) will make some people feel drowsy. Concentrations of 7% to 10% cause dizziness, headache, visual and hearing dysfunction, and unconsciousness within a few minutes to an hour.
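The percent-to-ppm conversion and the thresholds quoted above can be expressed as a small helper. The classification below uses only the figures cited in this text and is an illustrative sketch, not medical guidance:

```python
# Converting CO2 concentration between volume percent and parts per million,
# with the rough effect thresholds quoted in the text (1% = drowsiness in
# some people; 7-10% = dizziness, headache, possible unconsciousness).
def percent_to_ppm(percent):
    return percent * 10_000

def co2_effect(percent):
    """Rough classification based only on the figures cited above."""
    if percent >= 7:
        return "dizziness, headache, possible unconsciousness"
    if percent >= 1:
        return "drowsiness in some people"
    return "below the cited symptom thresholds"

print(percent_to_ppm(1))   # 10000
print(co2_effect(1))       # drowsiness in some people
print(co2_effect(0.04))    # below the cited symptom thresholds
```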
Carbon dioxide is used by the food industry, the oil industry, and the chemical industry. It is used in many consumer products that require pressurized gas because it is inexpensive and nonflammable, and because it undergoes a phase transition from gas to liquid at room temperature at an attainable pressure of approximately 60 bar (870 psi, 59 atm), allowing far more carbon dioxide to fit in a given container than otherwise would. Life jackets often contain canisters of pressurized carbon dioxide for quick inflation. Aluminum capsules are also sold as supplies of compressed gas for airguns and paintball markers, for inflating bicycle tires, and for making seltzer. Rapid vaporization of liquid carbon dioxide is used for blasting in coal mines. High concentrations of carbon dioxide can also be used to kill pests, such as the Common Clothes Moth.
Martin Roberts wrote Citect for DOS, released in 1987, as a response to the limited range of PC-based operator interface software available at the time. Citect for DOS consisted of a configuration database (in dBase format), a bitmap (256-color raw format) and an animation file. The user would draw a representation of a facility using the readily available Dr Halo graphics package, placing "Animation Points" in the desired locations. "Tags" were assigned in the configuration databases, equating to addresses within the programmable electronic devices Citect was communicating with. By referencing these tags at animation points using other configuration databases, the user could show the state of equipment, such as running, stopped or faulted, in real time.
Citect for DOS could communicate with various programmable electronic devices via the various serial links offered by the device; some through direct PC serial port connections, others through third-party PC-based cards designed to communicate with the target programmable electronic device. Software drivers were written for many protocols; its ability to communicate with a variety of devices, and to have new drivers written when required, became a primary selling point for Citect. The runtime software ran on a DSI card, a 32-bit co-processor inserted into an available ISA slot in the PC; this was necessary because the 286 and 386 PCs available at the time lacked sufficient processing power.
At this time Citect for Windows had the dominant market share of PC-based operator interface software, but new competing software was catching up to Citect's features and functionality and gaining in popularity. Citect began to focus more on remaining competitive; version 5 contained mainly features aimed at keeping the software at the leading edge of the market. Version 6 continued this trend and included more SCADA-like functionality in addition to the poll-based real-time control system that remains the core of the Citect software today. Version 7, released in August 2007, is currently the latest version of CitectSCADA.
Citric acid exists in a variety of fruits and vegetables, most notably citrus fruits. Lemons and limes have particularly high concentrations of the acid; it can constitute as much as 8% of the dry weight of these fruits (1.44 and 1.38 grams per ounce of the juices, respectively). The concentrations of citric acid in citrus fruits range from 0.005 mol/L for oranges and grapefruits to 0.030 mol/L in lemons and limes. These values will vary depending on the circumstances in which the fruit was grown.
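For reference, the molar concentrations above can be converted to grams per liter using the molar mass of anhydrous citric acid (C6H8O7, about 192.1 g/mol):

```python
# Converting the molar concentrations quoted above into grams per liter.
CITRIC_ACID_MOLAR_MASS = 192.12  # g/mol, anhydrous citric acid (C6H8O7)

def molar_to_grams_per_liter(mol_per_liter):
    return mol_per_liter * CITRIC_ACID_MOLAR_MASS

# 0.005 mol/L (oranges, grapefruit) and 0.030 mol/L (lemons, limes)
print(round(molar_to_grams_per_liter(0.005), 2))  # 0.96
print(round(molar_to_grams_per_liter(0.030), 2))  # 5.76
```

So lemon and lime juice carry on the order of a few grams of citric acid per liter at the quoted molarities.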
Citric acid is used in biotechnology and the pharmaceutical industry to passivate high-purity process piping (in lieu of using nitric acid). Nitric acid is considered hazardous to dispose once used for this purpose, while citric acid is not.
Citric acid is the active ingredient in some bathroom and kitchen cleaning solutions. A solution with a 6% concentration of citric acid will remove hard water stains from glass without scrubbing. In industry it is used to dissolve rust from steel.
Citric acid is one of the chemicals required for the synthesis of HMTD, a highly heat-, friction-, and shock-sensitive explosive similar to acetone peroxide. For this reason, purchases of large quantities of citric acid may rouse suspicion of potential terrorist activity.
Citric acid can be added to ice cream to keep fat globules separate, and can be added to recipes in place of fresh lemon juice as well. Citric acid is used along with sodium bicarbonate in a wide range of effervescent formulae, both for ingestion (e.g., powders and tablets) and for personal care (e.g., bath salts, bath bombs, and cleaning of grease).
Citric acid is commonly employed in wine production as a substitute or improver where fruits containing little or no natural acidity are used. Citric acid aids fermentation and is used in preference to other acids, for example tartaric acid, because of its stability, pleasant flavor and ability to impart brilliance to the finished wine.
When applied to hair, citric acid opens up the outer layer, also known as the cuticle. While the cuticle is open, products can penetrate deeper into the hair shaft. Citric acid can be used in shampoo to wash wax and coloring out of the hair. It is notably used in the product "Sun-in" for bleaching, but is generally not recommended because of the amount of damage it causes.
Citric acid is also used as a stop bath in photography. The developer is normally alkaline, so a mild acid will neutralize it, increasing the effectiveness of the stop bath when compared to plain water.
Citric acid is used as one of the active ingredients in the production of anti-viral tissues.
Cleanrooms can be very large. Entire manufacturing facilities can be contained within a cleanroom with factory floors covering thousands of square meters. They are used extensively in semiconductor manufacturing, biotechnology, the life sciences and other fields that are very sensitive to environmental contamination.
The air entering a cleanroom from outside is filtered to exclude dust, and the air inside is constantly recirculated through high efficiency particulate air (HEPA) and ultra low penetration air (ULPA) filters to remove internally generated contaminants. Staff enter and leave through airlocks (sometimes including an air shower stage), and wear protective clothing such as hats, face masks, gloves, boots and cover-alls.
Equipment inside the cleanroom is designed to generate minimal air contamination. There are even specialized mops and buckets. Cleanroom furniture is also designed to produce a low amount of particles and to be easy to clean. Common materials such as paper, pencils, and fabrics made from natural fibers are often excluded; however, alternatives are available. Cleanrooms are not sterile (i.e., free of uncontrolled microbes) and more attention is given to airborne particles. Particle levels are usually tested using a particle counter. Some cleanrooms are kept at a positive pressure so that if there are any leaks, air leaks out of the chamber instead of unfiltered air coming in.
Some cleanroom HVAC systems control the humidity to relatively low levels, such that extra precautions are necessary to prevent electrostatic discharge (ESD) problems. ESD controls, such as ionizers, are also used in rooms where ESD-sensitive products are produced or handled.
Low-level cleanrooms may only require special shoes, ones with completely smooth soles that do not track in dust or dirt. However, shoe bottoms must not create slipping hazards (safety always takes precedence). Entering a cleanroom usually requires wearing a cleanroom suit.
In cheaper cleanrooms, in which the standards of air contamination are less rigorous, the entrance to the cleanroom may not have an air shower. There is an anteroom in which the special suits must be put on, but a person can then walk directly into the room.
Some manufacturing facilities do not use fully classified cleanrooms, but use some cleanroom practices together to maintain their cleanliness requirements.
Cleanrooms are classified according to the number and size of particles permitted per volume of air. Large numbers like "class 100" or "class 1000" refer to US FED STD 209E, and denote the number of particles of size 0.5 µm or larger permitted per cubic foot of air. The standard also allows interpolation, so it is possible to describe e.g. "class 2000".
Small numbers refer to ISO 14644-1 standards, which specify the decimal logarithm of the number of particles 0.1 µm or larger permitted per cubic meter of air. So, for example, an ISO class 5 cleanroom has at most 10⁵ = 100,000 such particles per m³.
Note that both FS 209E and ISO 14644-1 are based on assumed log-log relationships between particle size and particle concentration. For that reason, there are no "zero" particle concentrations listed. The table locations without entries are N/A ("not applicable") combinations of particle sizes and cleanliness classes. They should not be read as zero.
Because 1 m³ is approximately 35 ft³, the two standards are mostly equivalent when measuring 0.5 µm particles, although the testing standards differ. Ordinary room air is approximately class 1,000,000 or ISO 9.
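The correspondence between the two standards follows from the ISO 14644-1 class-limit formula, which permits 10^N × (0.1/D)^2.08 particles of diameter D µm or larger per cubic meter for ISO class N. The sketch below shows why ISO 5 lines up with US FED STD 209E class 100:

```python
# ISO 14644-1 class limit: permitted count of particles of diameter D (µm)
# or larger, per cubic meter, for ISO class N.
def iso_limit(iso_class, particle_um=0.1):
    return 10 ** iso_class * (0.1 / particle_um) ** 2.08

FT3_PER_M3 = 35.3147  # cubic feet per cubic meter

# ISO 5 at 0.5 µm works out to roughly 3,520 particles per m³, i.e. about
# 100 per ft³ -- which is why ISO 5 corresponds to FED STD 209E "class 100".
per_m3 = iso_limit(5, 0.5)
print(round(per_m3))               # roughly 3,520
print(round(per_m3 / FT3_PER_M3))  # roughly 100
```

At the reference size of 0.1 µm the formula reduces to 10^N exactly, matching the "decimal logarithm" definition given above.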
Conventional power plants emit the heat created as a by-product of electricity generation into the environment through cooling towers, flue gas, or by other means. CHP or a bottoming cycle captures the by-product heat for domestic or industrial heating purposes, either very close to the plant, or —especially in Scandinavia and Eastern Europe—for distribution through pipes to heat local housing.
In the United States, Con Edison produces 30 billion pounds of steam each year through its seven cogeneration plants, which boil water to 1,000 °F (538 °C) before piping it to 100,000 buildings in Manhattan, the biggest commercial steam system in the world.
By-product heat at moderate temperatures (212-356 °F / 100-180 °C) can also be used in absorption chillers for cooling. A plant producing electricity, heat and cold is sometimes called a trigeneration or, more generally, a polygeneration plant.
Cogeneration is a thermodynamically efficient use of fuel. In separate production of electricity some energy must be rejected as waste heat, but in cogeneration this thermal energy is put to good use.
Thermal power plants (including those that use fissile elements or burn coal, petroleum, or natural gas), and heat engines in general, do not convert all of their available energy into electricity. In most heat engines, a bit more than half is wasted as excess heat (see: Second law of thermodynamics). By capturing the excess heat, CHP uses heat that would be wasted in a conventional power plant, potentially reaching an efficiency of up to 89%, compared with 55% for the best conventional plants. This means that less fuel needs to be consumed to produce the same amount of useful energy. Also, less pollution is produced for a given economic benefit.
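The fuel savings implied by these efficiency figures can be sketched with simple arithmetic. The comparison below is deliberately simplified: it treats both plants as delivering the same total useful energy (electricity plus heat) at the quoted overall efficiencies, and the demand figure is hypothetical:

```python
# Illustrative fuel-use comparison using the efficiency figures quoted
# above: up to ~89% for CHP versus ~55% for the best conventional plants.
def fuel_required(useful_energy, efficiency):
    """Fuel input needed to deliver a given amount of useful energy."""
    return useful_energy / efficiency

useful_energy = 100.0  # MWh of combined electricity + heat demand (hypothetical)

conventional = fuel_required(useful_energy, 0.55)  # by-product heat discarded
chp = fuel_required(useful_energy, 0.89)           # by-product heat captured

print(round(conventional, 1))                       # 181.8 MWh of fuel
print(round(chp, 1))                                # 112.4 MWh of fuel
print(f"{(1 - chp / conventional):.0%} less fuel")  # 38% less fuel
```

This is the sense in which "less fuel needs to be consumed to produce the same amount of useful energy."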
Some tri-cycle plants have utilized a combined cycle in which several thermodynamic cycles produced electricity, with a heating system then used as the condenser of the power plant's bottoming cycle. For example, the RU-25 MHD generator in Moscow heated a boiler for a conventional steam power plant, whose condensate was then used for space heating. A more modern system might use a gas turbine powered by natural gas, whose exhaust powers a steam plant, whose condensate provides heat. Tri-cycle plants can have thermal efficiencies above 80%.
An exact match between the heat and electricity needs rarely exists. A CHP plant can either meet the need for heat (heat driven operation) or be run as a power plant with some use of its waste heat.
CHP is most efficient when the heat can be used on site or very close to it. Overall efficiency is reduced when the heat must be transported over longer distances. This requires heavily insulated pipes, which are expensive and inefficient; whereas electricity can be transmitted along a comparatively simple wire, and over much longer distances for the same energy loss.
A car engine becomes a CHP plant in winter, when the reject heat is useful for warming the interior of the vehicle. This example illustrates the point that deployment of CHP depends on heat uses in the vicinity of the heat engine.
Cogeneration plants are commonly found in district heating systems of big towns, hospitals, prisons, oil refineries, paper mills, wastewater treatment plants, thermal enhanced oil recovery wells and industrial plants with large heating needs.
Thermally enhanced oil recovery (TEOR) plants often produce a substantial amount of excess electricity. After generating electricity, these plants pump leftover steam into heavy oil wells so that the oil will flow more easily, increasing production. TEOR cogeneration plants in Kern County, California produce so much electricity that it cannot all be used locally and is transmitted to Los Angeles.
Cogeneration of this kind is also referred to as decentralized energy.
Perhaps the first modern use of energy recycling was done by Thomas Edison. His 1882 Pearl Street Station, the world’s first commercial power plant, was a combined heat and power plant, producing both electricity and thermal energy while using waste heat to warm neighboring buildings. Recycling allowed Edison’s plant to achieve approximately 50 percent efficiency.
By the early 1900s, regulations emerged to promote rural electrification through the construction of centralized plants managed by regional utilities. These regulations not only promoted electrification throughout the countryside, but they also discouraged decentralized power generation, such as cogeneration. They even went so far as to make it illegal for non-utilities to sell power.
By 1978, Congress recognized that efficiency at central power plants had stagnated and sought to encourage improved efficiency with the Public Utility Regulatory Policies Act (PURPA), which encouraged utilities to buy power from other energy producers.
Cogeneration plants proliferated, soon producing about 8 percent of all energy in the U.S. However, the bill left implementation and enforcement up to individual states, resulting in little or nothing being done in many parts of the country.
In 2008 Tom Casten, chairman of the company Recycled Energy Development, said that "We think we could make about 19 to 20 percent of U.S. electricity with heat that is currently thrown away by industry."
Outside the U.S., energy recycling is more common. Denmark is probably the most active energy recycler, obtaining about 55% of its energy from cogeneration and waste heat recovery. Other large countries, including Germany, Russia, and India, also obtain a much higher share of their energy from decentralized sources.
In the Netherlands and Belgium, large improvements in yield are achieved by harvesting the full plant and crushing it during harvesting. The material is used primarily as feed for cows during the winter season and is known as "kuilmais" (ensiled maize).
The uses for corn stover are growing over time. One use of corn stover pertains to corn producers who also raise cattle. Corn stover can be beneficial to some cattle producers because the corn stover can provide a low cost feed source for mid-gestation beef cows. In addition to the stalks, leaves, husks, and cobs remaining in the field, kernels of grain may also be left over from harvest.
These leftover kernels, along with the corn stover, serve as an additional feed source for grazing cattle. Over time the stalks decrease in value as feed, so it is important to graze the corn stover as soon as possible after harvest.
The amount of grazing possible on a field of corn stover is "between one and two months of grazing per cow per acre (50 cows on 50 acres for one to two months)."
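Taken at face value, the rule of thumb above is straightforward arithmetic. In the sketch below, the midpoint rate is an illustrative assumption, not a figure from the source:

```python
# Rough grazing-capacity arithmetic from the rule of thumb quoted above:
# one to two cow-months of grazing per acre of corn stover.
def grazing_months(acres, cows, months_per_acre=1.5):
    """Months a herd can graze a stover field (default is a hypothetical midpoint rate)."""
    return acres * months_per_acre / cows

# The quoted example: 50 cows on 50 acres
print(grazing_months(50, 50, months_per_acre=1))  # 1.0 (lower bound)
print(grazing_months(50, 50, months_per_acre=2))  # 2.0 (upper bound)
```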
Another recent and important use for corn stover is biomass ethanol, that is, ethanol made from non-grain plant materials known as biomass. Grain-based ethanol production is made possible by the large availability of corn grain; biomass ethanol would use the corn stover from the corn crop produced in areas around ethanol plants. Because it is produced alongside the corn grain already supplied for ethanol production, corn stover is by far the most abundant crop residue readily available today, and this easy accessibility makes it a prime candidate for biomass ethanol production. Corn stover serves many purposes in today's agricultural economy and will continue to do so in the future.
The term cost management is widely used in business today, but it has no uniform definition. Cost management generally describes the approaches and activities of managers in short-run and long-run planning and control decisions that increase value for customers and lower the costs of products and services. For example, managers make decisions regarding the amount and kind of material being used, changes to plant processes, and changes in product designs. Information from accounting systems helps managers make such decisions, but the information and the accounting systems themselves are not cost management.
Cost management has a broad focus. It includes – but is not confined to – the continuous control of costs. The planning and control of costs is usually inextricably linked with revenue and profit planning. For instance, to enhance revenues and profits, managers often deliberately incur additional costs for advertising and product modifications.
Cost management is not practiced in isolation. It is an integral part of general management strategies and their implementation. Examples include programs that enhance customer satisfaction and quality as well as programs that promote new product development. Many cost management concepts are inevitably intertwined with manufacturing and production concepts, such as lean accounting, value chain analysis, throughput accounting, theory of constraints, etc.
The belts are permeable to allow the extrusion of the liquid through them. The mixture is first placed on the lower belt, then "sandwiched" between the upper and lower belts. The belts and the retained mixture then pass through a wedge section where the mixture is evenly distributed between the belts, and an initial volume of liquid is removed. The belts then pass through a series of progressively smaller diameter rollers where the retained mixture is compressed for further liquid removal. The belts are then separated and the dry "cake" is removed from the belts, generally by a scraping apparatus. The belts then pass through one or more belt washers, after which the process is repeated. The liquid from the extrusion and belt washing processes is collected for disposal.
Digesters are only marginally effective at reducing problems with odors, pathogens and greenhouse gas emissions from animal waste or sewage sludge, and they cannot remove chemical contaminants from the wastes. Digesters are also not emissions-free: they are known to emit nitrogen and sulfur oxides, particulate matter, carbon monoxide and ammonia.
In business transactions, the due diligence process varies for different types of companies. The relevant areas of concern may include the financial, legal, labor, tax, environment and market/commercial situation of the company. Other areas include intellectual property, real and personal property, insurance and liability coverage, debt instrument review, employee benefits and labor matters, immigration, and international transactions.
1. Cost-benefit analysis. A form of economic evaluation in which input is measured in terms of dollar costs and output is measured in terms of economic benefit of a project as compared to the incurred cost of the project.
2. Cost-effectiveness analysis. A comparison study between the cost of an improvement (initial plus maintenance) and the benefits it provides. The latter may be derived from collisions reduced, travel time reduced, or increased volume of usage, translated into equivalent dollars saved. (Source: http://safety.fhwa.dot.gov/xings/07010/appena.htm)
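The distinction can be made concrete with a small sketch. The function below computes a discounted benefit-cost ratio for a hypothetical safety improvement; all dollar figures, the discount rate, and the analysis horizon are illustrative assumptions, not values from the source.

```python
def benefit_cost_ratio(annual_benefits, initial_cost, annual_maintenance,
                       years, discount_rate):
    """Compare discounted benefits of an improvement against its
    initial plus maintenance costs (all figures in dollars)."""
    pv_benefits = sum(annual_benefits / (1 + discount_rate) ** t
                      for t in range(1, years + 1))
    pv_costs = initial_cost + sum(annual_maintenance / (1 + discount_rate) ** t
                                  for t in range(1, years + 1))
    return pv_benefits / pv_costs

# Hypothetical crossing improvement: $500,000 up front, $10,000/yr upkeep,
# $120,000/yr in collision and delay savings over 10 years at 4%.
ratio = benefit_cost_ratio(120_000, 500_000, 10_000, 10, 0.04)
```

A ratio above 1.0 indicates that discounted benefits exceed discounted costs, the usual acceptance threshold in such analyses.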
The main types of continuous effluent application systems are: sump and gravity flow (generally through a moveable hose); sump, pump and moveable sprinkler; and sump and effluent tanker.
To protect pumps and to avoid pipe blockages, each of the systems needs a stone trap, screen, or trafficable solids trap to remove coarse solids and foreign material from the effluent stream before it enters the sump.
Advantages:
Higher effluent nutrient concentrations compared to effluent treated in a pond, where nitrogen is volatilised (released as ammonia gas) and sludge settles out.
Solid wastes are applied to land with the liquid effluent, so there is no pond sludge to deal with.
Low capital cost for the sump and gravity flow system; medium-to-high capital cost for the pump and sprinkler and effluent tanker systems.
Disadvantages:
Regular shifting of the sprinkler/hose is required (every couple of days) to avoid overloading land areas with nutrients and salts.
Limited storage capacity (often less than one day's wash-down, flushing and hosing volume) means that effluent is inevitably applied to wet land during periods of prolonged wet weather. This can potentially result in contaminated runoff entering watercourses and/or leaching into groundwater.
Limited area over which effluent can be applied; the effluent application area must be well removed from watercourses.
Sump-pump systems require a reliable pump capable of handling high levels of solids, and pump breakdowns have to be fixed very promptly because of the very limited storage capacity.
No scope for recycling of effluent for yard flushing purposes.
Tankers may be difficult or impossible to operate in wet conditions on most soil types.
Treatment and storage systems employ one or more ponds (generally one or two) to treat the daily inflow of effluent from the milking shed and yards and to store both the liquid effluent and solids (sludge) that settle out of the effluent. Pond systems can also collect, treat and store runoff from concrete and earth yards, and in some cases, feed pads and regularly used laneways. The liquid effluent is stored until it is either irrigated onto crop or pasture, or recycled for yard flushing purposes.
A number of effluent ponds may be constructed in series to treat and store dairy effluent. The first pond in such a series is generally referred to as the primary pond and the second pond as the secondary pond. While systems employing three or more ponds are uncommon, the third pond may be referred to as a tertiary pond. The quality of the treated effluent in the final pond generally improves as the number of ponds in the effluent management system increases. Sludge accumulates in the primary pond and is removed at regular intervals. Primary ponds are commonly designed to store between one and ten years' accumulated sludge. The sludge storage capacity generally depends on the intended method of sludge removal.
Regardless of the number of ponds in the effluent management system, the following three storage / treatment volume components must be provided: active treatment volume - to maintain the necessary bacterial population to treat and break down the organic matter in the effluent stream; sludge storage volume - to store the solids that settle out of the effluent during treatment; and wet weather storage volume - to store liquid effluent during periods when the land is too wet for effluent irrigation, or until the timing of effluent irrigation suits other farm management considerations.
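As a rough sketch, the total pond volume is simply the sum of the three components described above. The figures below (daily inflow, treatment days, sludge accumulation rate, and storage horizons) are hypothetical placeholders for illustration, not engineering design guidance.

```python
def pond_volume(daily_inflow_m3, treatment_days,
                sludge_accum_m3_per_yr, sludge_storage_years,
                wet_weather_storage_days):
    """Total pond volume (cubic metres) as the sum of the three
    storage/treatment components: active treatment, sludge storage,
    and wet weather storage."""
    active = daily_inflow_m3 * treatment_days
    sludge = sludge_accum_m3_per_yr * sludge_storage_years
    wet_weather = daily_inflow_m3 * wet_weather_storage_days
    return active + sludge + wet_weather

# Hypothetical dairy: 20 m3/day inflow, 30-day treatment volume,
# 150 m3/yr sludge accumulation stored for 5 years, 90 days wet weather storage.
total = pond_volume(20, 30, 150, 5, 90)
```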
The modern distribution system begins as the primary circuit leaves the sub-station and ends as the secondary service enters the customer's meter socket. A variety of methods, materials, and equipment are used among the various utility companies, but the end result is similar. First, the energy leaves the sub-station in a primary circuit, usually with all three phases.
The most common type of primary is known as a wye configuration (so named because of the shape of a "Y".) The wye configuration includes 3 phases (represented by the three outer parts of the "Y") and a neutral (represented by the center of the "Y".) The neutral is grounded both at the substation and at every power pole. In a typical 12470Y/7200 volt system, the pole mount transformer's primary winding is rated for 7200 volts and is connected across one phase of power and the neutral. The primary and secondary (low voltage) neutrals are bonded (connected) together to provide a path to blow the primary fuse if any fault occurs that allows primary voltage to enter the secondary lines. An example of this type of fault would be a primary phase falling across the secondary lines. Another example would be some type of fault in the transformer itself.
The other type of primary configuration is known as delta. This method is older and less common. Delta is so named because of the shape of the Greek letter delta, a triangle. Delta has only 3 phases and no neutral. In delta there is only a single voltage, between two phases (phase to phase), while in wye there are two voltages, between two phases and between a phase and neutral (phase to neutral).
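The relationship between the two wye voltages follows from three-phase geometry: the phase-to-phase voltage is the square root of 3 times the phase-to-neutral voltage. A quick check against the 12470Y/7200 volt system mentioned earlier:

```python
import math

def phase_to_neutral(phase_to_phase_volts):
    """In a wye system, phase-to-neutral voltage equals
    phase-to-phase voltage divided by sqrt(3)."""
    return phase_to_phase_volts / math.sqrt(3)

v = phase_to_neutral(12470)   # approximately 7200 volts
```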
Wye primary is safer because if one phase becomes grounded, that is, makes connection to the ground through a person, tree, or other object, it should trip out the fused cutout, similar to a household circuit breaker tripping. In delta, if a phase makes connection to ground it will continue to function normally. It takes two or three phases to make connection to ground before the fused cutouts will open the circuit. The voltage for this configuration is usually 4800 volts. Transformers are sometimes used to step down from 7200 or 7600 volts to 4800 volts, or to step up from 4800 volts to 7200 or 7600 volts. When the voltage is stepped up, a neutral is created by bonding one leg of the 7200/7600 side to ground. This is commonly used to power single-phase underground services or whole housing developments that are built in 4800 volt delta distribution areas. Step-downs are used in areas that have been upgraded to 7200/12470Y or 7600/13200Y where the power company chooses to leave a section as a 4800 volt setup. Sometimes power companies choose to leave sections of a distribution grid at 4800 volts because this setup is less likely to trip fuses or reclosers in heavily wooded areas where trees come into contact with lines.
Current environmental problems have evolved into a complex set of interdisciplinary issues involving ecological, political, economic, social, as well as physical and biological considerations. Modern environmental studies must include the study of the urban environment as well as the natural environment.
Except for the use of fire, the fermentation of sugar into ethanol is very likely the earliest organic reaction known to humanity, and the intoxicating effects of ethanol consumption have been known since ancient times. In modern times, ethanol intended for industrial use is also produced from byproducts of petroleum refining.
Ethanol has widespread use as a solvent of substances intended for human contact or consumption, including scents, flavorings, colorings, and medicines. In chemistry, it is both an essential solvent and a feedstock for the synthesis of other products. It has a long history as a fuel for heat and light and also as a fuel for internal combustion engines.
Combustion of ethanol in an internal combustion engine yields many of the products of incomplete combustion that are produced by gasoline, plus significantly larger amounts of formaldehyde and related species such as acetaldehyde. This leads to a significantly greater photochemical reactivity that generates much more ground-level ozone. These data have been assembled into The Clean Fuels Report comparison of fuel emissions, which shows that ethanol exhaust generates 2.14 times as much ozone as gasoline exhaust. When this is added into the custom "Localized Pollution Index" (LPI) of The Clean Fuels Report, the local pollution (i.e., that which contributes to smog) is 1.7 on a scale where gasoline is 1.0 and higher numbers signify greater pollution. The California Air Resources Board formalized this issue in 2008 by recognizing control standards for formaldehyde and related aldehydes as an emissions control group, much like the conventional NOx and Reactive Organic Gases (ROGs).
The largest single use of ethanol is as a motor fuel and fuel additive. The largest national fuel ethanol industries exist in Brazil (gasoline sold in Brazil contains at least 25% ethanol and anhydrous ethanol is also used as fuel in more than 90% of new cars sold in the country). The Brazilian production of ethanol is praised for the high carbon sequestration capabilities of the sugar cane plantations, thus making it a real option to combat climate change.
Henry Ford designed the first mass-produced automobile, the famed Model T Ford, to run on pure anhydrous ethanol; he said it was "the fuel of the future". Today, however, 100% pure ethanol is not approved as a motor vehicle fuel in the US. Added to gasoline, ethanol reduces ground-level ozone formation by lowering volatile organic compound and hydrocarbon emissions, decreasing emissions of carcinogenic benzene and butadiene, and reducing particulate matter emissions from gasoline combustion.
Prior to the development of electronic fuel injection (EFI) and computerized engine management, the lower energy content of ethanol required that the engine carburetor be rejetted to permit a larger volume of fuel to mix with the intake air. EFI is able to actively compensate for varying fuel energy densities by monitoring the oxygen content of exhaust gases. However, a standard EFI gasoline engine can typically only tolerate up to 10% ethanol and 90% gasoline. Higher ethanol ratios require either larger-volume fuel injectors or an increase in fuel rail pressure to deliver the greater liquid volume needed to equal the energy content of pure gasoline.
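The required increase in fuel volume can be estimated from volumetric energy densities. The heating values below are typical published approximations (actual values vary with fuel formulation), and the blend arithmetic ignores combustion-efficiency differences between the fuels:

```python
# Approximate lower heating values in MJ per litre (typical published
# figures; exact values vary with fuel formulation).
GASOLINE_MJ_L = 32.0
ETHANOL_MJ_L = 21.2

def fuel_volume_multiplier(ethanol_fraction):
    """How much more blend volume is needed to match the energy
    content of pure gasoline, for a given ethanol volume fraction."""
    blend_mj_l = (ethanol_fraction * ETHANOL_MJ_L
                  + (1 - ethanol_fraction) * GASOLINE_MJ_L)
    return GASOLINE_MJ_L / blend_mj_l

e85 = fuel_volume_multiplier(0.85)   # roughly a 40% larger fuel volume
```

This is why higher ethanol ratios require larger-volume injectors or higher fuel rail pressure, as described above.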
Today, more than 20% of the Brazilian fleet of cars on the streets are able to use 100% ethanol as fuel, including both ethanol-only engines and flex-fuel engines. Flex-fuel engines in Brazil can run on all ethanol, all gasoline, or any mixture of the two. In the US, flex-fuel vehicles can run on 0% to 85% ethanol (15% gasoline), since higher ethanol blends are not yet allowed. Brazil supports this population of ethanol-burning automobiles with a large national infrastructure that produces ethanol from domestically grown sugar cane. Sugar cane not only has a greater concentration of sucrose than corn (by about 30%), but its sugar is also much easier to extract. The bagasse generated by the process is not wasted, but is used in power plants as a surprisingly efficient fuel to produce electricity.
World production of ethanol in 2006 was 51 gigalitres (13,000,000,000 US gal), with 69% of the world supply coming from Brazil and the United States.
The United States fuel ethanol industry is based largely on corn. According to the Renewable Fuels Association, as of October 30, 2007, 131 grain ethanol bio-refineries in the United States have the capacity to produce 7.0 billion US gallons (26 GL) of ethanol per year. An additional 72 construction projects underway (in the U.S.) can add 6.4 billion gallons of new capacity in the next 18 months. Over time, it is believed that a material portion of the ~150 billion gallon per year market for gasoline will begin to be replaced with fuel ethanol.
The Energy Policy Act of 2005 requires that 4 billion gallons of "renewable fuel" be used in 2006 and this requirement will grow to a yearly production of 7.5 billion gallons by 2012.
In the United States, ethanol is most commonly blended with gasoline as a 10% ethanol blend nicknamed "gasohol". This blend is widely sold throughout the U.S. Midwest, and in cities required by the 1990 Clean Air Act to oxygenate their gasoline during the winter.
A feasibility study is an important part of creating a business plan for a new enterprise, since it has been estimated that only one idea in fifty is commercially viable. If a project is seen to be feasible from the results of the study, the next logical step is to proceed with it. The research and information uncovered in the feasibility study will support the detailed planning and reduce the amount of research needed later.
Heating, ventilating, and air conditioning is based on the basic principles of thermodynamics, fluid mechanics, and heat transfer, and on the inventions and discoveries made by Michael Faraday, Willis Carrier, Reuben Trane, James Joule, William Rankine, Sadi Carnot, and many others. The invention of the components of HVAC systems goes hand-in-hand with the industrial revolution, and new methods of modernization, higher efficiency, and system control are constantly introduced by companies and inventors all over the world.
The three functions of heating, ventilating, and air-conditioning are closely interrelated. All seek to provide thermal comfort, acceptable indoor air quality, and reasonable installation, operation, and maintenance costs. HVAC systems can provide ventilation, reduce air infiltration, and maintain pressure relationships between spaces. How air is delivered to, and removed from spaces is known as room air distribution.
In modern buildings the design, installation, and control systems of these functions are integrated into one or more HVAC systems. For very small buildings, contractors normally "size" and select HVAC systems and equipment. For larger buildings where required by law, "building services" designers and engineers, such as mechanical, architectural, or building services engineers analyze, design, and specify the HVAC systems, and specialty mechanical contractors build and commission them. In all buildings, building permits and code-compliance inspections of the installations are the norm.
The HVAC industry is a worldwide enterprise, with career opportunities including operation and maintenance, system design and construction, equipment manufacturing and sales, and education and research. The HVAC industry was historically regulated by the manufacturers of HVAC equipment, but regulating and standards organizations such as ASHRAE, SMACNA, ACCA, the Uniform Mechanical Code, the International Mechanical Code, and AMCA have been established to support the industry and encourage high standards and achievement.
Whereas most engineering disciplines apply skills to very specific areas, industrial engineering is applied in virtually every industry. Examples of where industrial engineering might be used include shortening lines (or queues) at a theme park, streamlining an operating room, distributing products worldwide (also referred to as Supply Chain Management), and manufacturing cheaper and more reliable automobiles. Industrial engineers typically use computer simulation, especially discrete event simulation, for system analysis and evaluation.
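A minimal discrete-event-style sketch of the theme-park queue example: a single-server FIFO queue that computes each customer's waiting time from arrival times and service durations. A realistic industrial-engineering model would add stochastic arrivals and multiple servers; this toy version only illustrates the bookkeeping.

```python
def fifo_queue_waits(arrivals, service_times):
    """Single-server FIFO queue: given sorted arrival times and per-customer
    service durations, return each customer's waiting time."""
    server_free_at = 0.0
    waits = []
    for arrive, service in zip(arrivals, service_times):
        start = max(arrive, server_free_at)   # wait if the server is busy
        waits.append(start - arrive)
        server_free_at = start + service
    return waits

# Three riders arriving a minute apart at a two-minute ride:
waits = fifo_queue_waits([0, 1, 2], [2, 2, 2])   # waits grow: [0, 1, 2]
```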
The name "industrial engineer" can be misleading. While the term originally applied to manufacturing, it has grown to encompass services and other industries as well. Similar fields include Operations Research, Management Science, Financial Engineering, Supply Chain, Manufacturing Engineering, Engineering Management, Overall Equipment Effectiveness, Systems Engineering, Ergonomics, Process Engineering, Value Engineering and Quality Engineering.
There are a number of things industrial engineers do in their work to make processes more efficient, to make products more manufacturable and consistent in their quality, and to increase productivity.
Primary treatment also typically includes a sand or grit channel or chamber where the velocity of the incoming wastewater is carefully controlled to allow sand, grit, and stones to settle, while keeping the majority of the suspended organic material in the water column. This equipment is called a detritor or sand catcher. Sand, grit, and stones need to be removed early in the process to avoid damage to pumps and other equipment in the remaining treatment stages. Sometimes there is a sand washer (grit classifier) followed by a conveyor that transports the sand to a container for disposal. The contents of the sand catcher may be fed into the incinerator in a sludge processing plant, but in many cases the sand and grit are sent to a landfill.
The patent assessment begins by reviewing the company's pending patent applications and issued patents that cover its technology or, in the case of companies with large portfolios, focus on the company's core technology or an important core sector. The object of this assessment is not to place a monetary value on the portfolio. The purpose instead is first and foremost to review the pending patent applications and issued patents to ensure that the company's core technology is covered as extensively as is practical given the size and market share of the company and its financial resources.
The company's value can be further enhanced by licensing revenue. Patent applications and patents can be used to create a revenue stream for the company. Even if the technology developed is not one that the company intends to commercialize itself, filing a proper patent portfolio may enable the company to derive significant revenue by licensing that portfolio, even to competitors or quasi-competitors.
In the business method patent area, licensing a competitor may be quite acceptable from a competitive standpoint because frequently such patent applications and patents do not contain confidential information. Therefore deriving revenue from one's competitors is acceptable, whereas it would be unacceptable if the company's core product or core process were licensed.
The term "manure" was used for inorganic fertilizers in the past, but this usage is now very rare. There are two main classes of manures in soil management: green manures and animal manures. Compost is distinguished from manure in that it is the decomposed remnants of organic materials (which may, nevertheless, include manure).
Most animal manure is feces — excrement of plant-eating mammals (herbivores) and poultry — or plant material (often straw) which has been used as bedding for animals and thus is heavily contaminated with their feces and urine. Green manures are crops grown for the express purpose of plowing them under. In so doing, fertility is increased through the nutrients and organic matter that are returned to the soil. Leguminous crops, such as clover, also "fix" nitrogen through rhizobia bacteria in specialized nodules in the root structure. Other types of plant matter used as manure or fertilizer include the contents of the rumens of slaughtered ruminants and spent hops left over from making beer. The dried manure of animals has been used as fuel throughout history. Dried cow manure (usually known as dung) was, and still is, an important fuel source in countries such as India, while camel dung may be used in treeless regions such as deserts. On the Oregon Trail, pioneering families collected large quantities of "buffalo chips" in lieu of scarce firewood. It has been used for many purposes, in cooking fires and to combat the cold desert nights.
Another use of manure is to make paper; this has been done with dung from elephants where it is a small industry in Africa and Asia, and also horses, llamas, and kangaroos. Other than the llama, these animals are not ruminants and thus tend to pass plant fibres undigested in their dung.
Manure generates heat as it decomposes, and it is not unheard of for manure to ignite spontaneously should it be stored in a massive pile. Once such a large pile of manure is burning, it will foul the air over a very large area and require considerable effort to extinguish. Large feedlots must therefore take care to ensure that piles of fresh manure (faeces) do not get excessively large.
There is no serious risk of spontaneous combustion in smaller operations.
When fresh sewage or wastewater is added to a settling tank, approximately 50% of the suspended solid matter will settle out in about an hour and a half. This collection of solids is known as raw sludge or primary solids and is said to be "fresh" before anaerobic processes become active. Once anaerobic bacteria take over, the sludge will become putrescent in a short time and must be removed from the sedimentation tank before this happens. This is commonly accomplished in two ways. In an Imhoff tank, fresh sludge is passed through a slot to the lower story or digestion chamber where it is decomposed by anaerobic bacteria, resulting in liquefaction and reduced volume of the sludge. After digesting for an extended period, the result is called "digested" sludge and may be disposed of by drying and then landfilling.
Alternately, the fresh sludge may be continuously extracted from the tank mechanically and passed to separate sludge digestion tanks that operate at higher temperatures than the lower story of the Imhoff tank and, as a result, digest much more rapidly and efficiently.
Excess solids from biological processes such as activated sludge can be referred to as sludge, although more often called "biosolids," a public relations term that is increasingly used by water professionals in the United States. Industrial wastewater solids are also referred to as sludge, whether generated from biological or physical-chemical processes. Surface water plants also generate sludge made up of solids removed from the raw water.
There are five broad categories of MSW:
Biodegradable waste: food and kitchen waste, green waste, paper (can also be recycled).
Recyclable material: paper, glass, bottles, cans, metals, certain plastics, etc.
Inert waste: construction and demolition waste, dirt, rocks, debris.
Composite wastes: waste clothing, Tetra Paks, waste plastics such as toys.
Domestic hazardous waste (also called "household hazardous waste") & toxic waste: medication, e-waste, paints, chemicals, light bulbs, fluorescent tubes, spray cans, fertilizer and pesticide containers, batteries, shoe polish.
Balancing the flow in pipeline systems. This is performed by mainline transmission pipeline companies to maintain operational integrity of the pipelines, by ensuring that the pipeline pressures are kept within design parameters.
Maintaining contractual balance. Shippers use stored gas to maintain a balance between the volume they deliver to the pipeline system and the volume they withdraw. Without access to such storage facilities, any imbalance would result in a hefty penalty.
Leveling production over periods of fluctuating demand. Producers use storage to hold gas that is not immediately marketable, typically over the summer when demand is low, and deliver it in the winter months when demand is high.
Market speculation. Producers and marketers use gas storage as a speculative tool, storing gas when they believe that prices will increase in the future and then selling it when it does reach those levels.
Insuring against any unforeseen accidents. Gas storage can be used as insurance against unforeseen events that may affect either the production or the delivery of natural gas. These may include natural factors such as hurricanes, or malfunctions of production or distribution systems.
Meeting regulatory obligations. Gas storage ensures to some extent the reliability of gas supply to the end consumer at the lowest cost, which is what the regulatory body is there to ensure. This is a reason why the regulatory body is constantly monitoring storage inventory levels across the board.
Reducing price volatility. Gas storage ensures commodity liquidity at the market centers. This helps contain natural gas price volatility and uncertainty.
Offsetting changes in natural gas demands. Gas storage facilities are gaining more importance due to changes in natural gas demand. First, traditional supplies that were once relied upon to meet the winter peak demand are now unable to keep up. Second, there is a growing summer peak demand for natural gas, due to electric generation by gas-fired power plants.
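The speculative use described in the list above reduces to a simple arbitrage calculation: buy and inject gas when prices are low, withdraw and sell when they are high, net of storage charges. The prices and storage cost below are hypothetical, and the sketch ignores financing costs and fuel losses:

```python
def storage_arbitrage_profit(volume_mmbtu, summer_price, winter_price,
                             storage_cost_per_mmbtu):
    """Net profit from buying gas in summer, storing it, and selling it
    in winter (prices in $/MMBtu; ignores financing and fuel losses)."""
    return volume_mmbtu * (winter_price - summer_price
                           - storage_cost_per_mmbtu)

# Hypothetical: 100,000 MMBtu bought at $2.50, sold at $4.00,
# with $0.40/MMBtu in storage charges.
profit = storage_arbitrage_profit(100_000, 2.50, 4.00, 0.40)
```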
To identify low-cost O&M solutions for improving energy efficiency, comfort, and indoor air quality (IAQ)
To reduce premature equipment failure
To ensure optimal equipment performance
To obtain an understanding of current O&M and PM practices and O&M documentation
O&M assessments may be performed as a stand-alone activity that results in a set of O&M recommendations or as part of retro-commissioning (a larger more holistic approach to improving existing-building performance).
The goal of the assessment is to gain an understanding of how building systems and equipment are currently operated and maintained, why these O&M strategies were chosen, and what the most significant problems are for building staff and occupants. Implementing O&M changes without fully understanding the owner’s operational needs can have disappointing and even disastrous effects. Most projects require the development of a formal assessment instrument in order to obtain all the necessary O&M information. This instrument includes a detailed interview with the facility manager, building operators and maintenance service contractors who are responsible for the administration and implementation of the O&M program.
Depending on the scope of the project it may also include an in-depth site survey of equipment condition and gathering of nameplate information. Sample assessment forms are presented in Appendix A. An O&M assessment can take from a few days to several weeks to complete depending on the objectives and scope of the project. The assessment identifies the best opportunities for optimizing the energy-using systems and improving O&M practices. It provides the starting point for evaluating the present O&M program and a basis for understanding which O&M improvements are most cost effective to implement.
During early project stages, the Owner's Engineer: defines the project scope aligned with business objectives; performs market forecasts and competitive assessments; prepares capital budgets; identifies and evaluates technological alternatives; scopes additional technology development as required; prepares owner's design criteria for use in detailed engineering; prepares project scheduling and cash flow projections; identifies risks, judges importance, ranks and assigns mitigation responsibility within the Owner's organization; and prepares front-end engineering design (FEED), allowing the contractor to provide firm costs and schedules while minimizing change orders.
According to Frank Herzog, vice president of sales and marketing, paper machinery for Voith Paper, Appleton, Wisconsin, USA, there are multiple signs to identify that a paper machine needs a rebuild. "From the business perspective, manufacturing cost and product quality are vital to papermaking," he said. "Key drivers to rebuild and modernize the "aging" fleet of paper machines in North America include the following: need to improve current productivity driven by market and cost structure; need to improve current quality driven by market and cost structure; and sustain competitive position in regard to productivity, quality, and manufacturing cost of existing line."
Herzog added that another way to categorize these drivers and projects is by business needs, including cost justification and ROI. These needs include product strategy, cost reduction, maintenance, and meeting government regulations. According to Herzog, specific examples when a machine needs a rebuild (and the equipment that should be added) include: poor CD basis weight profiles and high fiber angles (dilution headbox); poor formation (upgrade to hybrid or gap former); dryer limitations (shoe presses); runnability issues in the dryer section (sheet stabilization through uni-run or single-tier sections); upgrade to value-added grades (coating and calendering); larger, better wound parent rolls (center wind reels); and reduced basis weights/lower quality furnish, which will in most cases require a rebuild (the entire machine would require a review).
Pentti Rautiainen, general manager, paper technology, paper business line for Metso Paper, Helsinki, Finland, noted that the target of an effective rebuild should be "operational excellence" compared to competing paper machines. "Operational excellence typically entails high production rates (high speed and efficiency) and the effective use of production inputs," he said.
Most successful rebuilds have similar characteristics, according to Voith Paper’s Herzog. "The early involvement of all parties involved in a rebuild is a key success factor in executing capital projects," he said. Beauchesne of GL&V added that since energy costs are increasing, some producers are looking more closely at press rebuilds. Shoe pressing can yield high dryness values and improve production costs. Press rebuilds can also improve runnability by reducing or eliminating long open draws, he concluded. (Rooks, Alan. "Great Rebuilds: A Practical Guide." Solutions! March 2004.)
Electrostatic precipitators are highly efficient filtration devices that minimally impede the flow of gases through the device, and can easily remove fine particulate matter such as dust and smoke from the air stream.
ESPs are excellent devices for the control of many industrial particulate emissions, including smoke from electricity-generating utilities (coal- and oil-fired), salt cake collection from black liquor boilers in pulp mills, and catalyst collection from fluidized bed catalytic cracker units in oil refineries, to name a few. These devices treat gas volumes from several hundred thousand ACFM to 2.5 million ACFM (1,180 m³/s) in the largest coal-fired boiler applications.
The original parallel plate–weighted wire design (described above) has evolved as more efficient (and robust) discharge electrode designs were developed; today the focus is on rigid discharge electrodes to which many sharpened spikes are attached, maximizing corona production. Transformer-rectifier systems apply voltages of 50–100 kilovolts at relatively high current densities. Modern controls minimize sparking and prevent arcing, avoiding damage to the components. Automatic rapping systems and hopper evacuation systems remove the collected particulate matter while on line, theoretically allowing ESPs to stay in operation for years at a time.
Electrostatic precipitation is typically a dry process, but spraying moisture to the incoming air flow helps collect the exceptionally fine particulates, and helps reduce the electrical resistance of the incoming dry material to make the process more effective. A wet electrostatic precipitator (WESP) merges the operational methods of a wet scrubber with an electrostatic precipitator to make a self-washing, self-cleaning yet still high-voltage device.
In chemical engineering terms, a process is typically a set of equipment arranged, controlled, and operated in a particular way, to produce a product. The product must meet certain specifications, such as a certain production rate, product quality, and cost. Typically, specifications are supplied as a range of values, such as "Must be between 84% and 87% octane", or "must cost less than $250 per ton."
When optimizing a process, the goal is to maximize one or more of the process specifications, while keeping all others within their constraints.
Fundamentally, there are three parameters that can be adjusted to affect optimal performance. These are:
Equipment optimization. The first step is to verify that the existing equipment is being used to its fullest advantage by examining operating data to identify equipment bottlenecks.
Operating procedures. Operating procedures may vary widely from person-to-person or from shift-to-shift. Automation of the plant can help significantly. But automation will be of no help if the operators take control and run the plant in manual.
Control optimization. In a typical processing plant, such as a chemical plant or oil refinery, there are hundreds or even thousands of control loops. Each control loop is responsible for controlling one part of the process, such as maintaining a temperature, level, or flow. If the control loop is not properly designed and tuned, the process runs below its optimum. The process will be more expensive to operate, and equipment will wear out prematurely. For each control loop to run optimally, identification of sensor, valve, and tuning problems is important. It has been well documented that over 35% of control loops typically have problems.
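The effect of loop tuning can be sketched with a toy simulation. The snippet below runs a hypothetical first-order process (all gains and constants are made-up illustrative values, not recommendations for any real loop) under a basic PID controller, comparing a reasonably tuned loop against a detuned one:

```python
# Toy simulation of one control loop: a first-order process (e.g. a heater)
# under PID control. All gains and constants are hypothetical.

def simulate_pid(kp, ki, kd, setpoint=50.0, steps=200, dt=1.0):
    """Return the process variable after `steps` control intervals."""
    pv = 20.0                    # process variable, e.g. temperature in C
    integral = 0.0
    prev_error = setpoint - pv
    for _ in range(steps):
        error = setpoint - pv
        integral += error * dt
        derivative = (error - prev_error) / dt
        output = kp * error + ki * integral + kd * derivative
        prev_error = error
        # first-order process: drifts toward ambient (20 C), driven by output
        pv += dt * (-0.1 * (pv - 20.0) + 0.05 * output)
    return pv

well_tuned = simulate_pid(kp=2.0, ki=0.1, kd=0.5)
detuned = simulate_pid(kp=0.1, ki=0.0, kd=0.0)

# The well-tuned loop settles near the 50 C setpoint; the detuned loop
# stalls far below it, i.e. the process runs below its optimum.
print(round(well_tuned, 1), round(detuned, 1))
```

The integral term is what eventually closes the gap to the setpoint; with only a weak proportional gain, the detuned loop settles at a large permanent offset.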
The process of continuously monitoring and optimizing the entire plant is sometimes called performance supervision.
The primary challenge of project management is to achieve all of the project goals and objectives while adhering to classic project constraints—usually scope, quality, time and budget. The secondary—and more ambitious—challenge is to optimize the allocation and integration of inputs necessary to meet pre-defined objectives. A project is a carefully defined set of activities that use resources (money, people, materials, energy, space, provisions, communication, motivation, etc.) to achieve the project goals and objectives.
Like any human undertaking, projects need to be performed and delivered under certain constraints. Traditionally, these constraints have been listed as "scope," "time," and "cost" (Chatfield, Carl. "A short course in project management", Microsoft.). These are also referred to as the "Project Management Triangle," where each side represents a constraint. One side of the triangle cannot be changed without affecting the others. A further refinement of the constraints separates product "quality" or "performance" from scope, and turns quality into a fourth constraint.
The Project Management Triangle. The time constraint refers to the amount of time available to complete a project. The cost constraint refers to the budgeted amount available for the project. The scope constraint refers to what must be done to produce the project's end result. These three constraints are often competing constraints: increased scope typically means increased time and increased cost, a tight time constraint could mean increased costs and reduced scope, and a tight budget could mean increased time and reduced scope.
The discipline of project management is about providing the tools and techniques that enable the project team (not just the project manager) to organize their work to meet these constraints.
Another approach to project management is to consider the three constraints as finance, time and human resources. If you need to finish a job in a shorter time, you can add more people to the problem, which in turn will raise the cost of the project, unless doing the task more quickly reduces costs elsewhere in the project by an equal amount.
Time. For analytical purposes, the time required to produce a deliverable is estimated using several techniques. One method is to identify the tasks needed to produce the deliverables documented in a work breakdown structure or WBS. The work effort for each task is estimated and those estimates are rolled up into the final deliverable estimate. The tasks are also prioritized, dependencies between tasks are identified, and this information is documented in a project schedule. The dependencies between the tasks can affect the length of the overall project (dependency constrained), as can the availability of resources (resource constrained). Time is considered neither a cost nor a resource, since the project manager cannot control the rate at which it is expended; this makes it different from all other resources and cost categories. It should be remembered that no effort expended will have any higher quality than that of the effort-expenders.
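The roll-up and dependency logic described above can be sketched in a few lines. The task names, estimates, and dependencies below are hypothetical examples, not part of any real WBS:

```python
from functools import lru_cache

# Hypothetical tasks: name -> (estimated days of work, prerequisite tasks).
tasks = {
    "design":   (5,  []),
    "build":    (10, ["design"]),
    "test":     (4,  ["build"]),
    "document": (3,  ["design"]),
}

@lru_cache(maxsize=None)
def finish_day(name):
    """Earliest day a task can finish, given its dependency chain."""
    days, deps = tasks[name]
    start = max((finish_day(d) for d in deps), default=0)
    return start + days

total_effort = sum(days for days, _ in tasks.values())  # rolled-up estimate
project_length = max(finish_day(t) for t in tasks)      # dependency-constrained

print(total_effort, project_length)  # 22 days of effort over a 19-day schedule
```

Note that the schedule (19 days) is shorter than the summed effort (22 days) because "document" can run in parallel with "build"; the dependency chain design → build → test sets the project length.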
Cost. The cost to develop a project depends on several variables, chiefly: resource costs, labor rates, material rates, risk management (i.e., cost contingency), earned value management, plant (buildings, machines, etc.), equipment, cost escalation, indirect costs, and profit. Beyond this basic accounting approach to fixed and variable costs, the economic cost that must be considered includes worker skill and productivity, which shows up as variation in project cost estimates. This is important when companies hire temporary or contract employees or outsource work.
Scope. Requirements specified to achieve the end result. The overall definition of what the project is supposed to accomplish, and a specific description of what the end result should be or accomplish. A major component of scope is the quality of the final product. The amount of time put into individual tasks determines the overall quality of the project. Some tasks may require a given amount of time to complete adequately, but given more time could be completed exceptionally. Over the course of a large project, quality can have a significant impact on time and cost (or vice versa).
Together, these three constraints have given rise to the phrase "On Time, On Spec, On Budget." In this case, the term "scope" is substituted with "spec(ification)."
Mechanical pulps retain most of the lignin present in the wood used to make the pulp and thus contain almost as much lignin as they do cellulose and hemicellulose. It would be impractical to remove this much lignin by bleaching, and undesirable since one of the big advantages of mechanical pulp is the high yield of pulp based on wood used. Therefore the objective of bleaching mechanical pulp (also referred to as brightening) is to remove only the chromophores (color-causing groups). This is possible because the structures responsible for color are also more susceptible to oxidation or reduction.
Alkaline hydrogen peroxide is the most commonly used bleaching agent for mechanical pulp. The amount of base such as sodium hydroxide is less than that used in bleaching chemical pulps and the temperatures are lower. These conditions allow alkaline peroxide to selectively oxidize non-aromatic conjugated groups responsible for absorbing visible light. The decomposition of hydrogen peroxide is catalyzed by transition metals, and iron, manganese and copper are of particular importance in pulp bleaching. The use of chelating agents like EDTA to remove some of these metal ions from the pulp prior to adding peroxide allows the peroxide to be used more efficiently. Magnesium salts and sodium silicate are also added to improve bleaching with alkaline peroxide.
Sodium dithionite (Na2S2O4), also known as sodium hydrosulfite, is the other main reagent used to brighten mechanical pulps. In contrast to hydrogen peroxide, which oxidizes the chromophores, dithionite reduces these color-causing groups. Dithionite reacts with oxygen, so efficient use of dithionite requires that oxygen exposure be minimized during its use.
Chelating agents can contribute to brightness gain by sequestering iron ions, for example as EDTA complexes, which are less colored than the complexes formed between iron and lignin. The brightness gains achieved in bleaching mechanical pulps are temporary since almost all of the lignin present in the wood is still present in the pulp. Exposure to air and light can produce new chromophores from this residual lignin. This is why newspaper yellows as it ages.
Chemical pulps, such as those from the kraft process or sulfite pulping, contain much less lignin than mechanical pulps, (<5% compared to approximately 40%). The goal in bleaching chemical pulps is to remove essentially all of the residual lignin, hence the process is often referred to as delignification. Sodium hypochlorite (household bleach) was initially used to bleach chemical pulps, but was largely replaced in the 1930s by chlorine. Concerns about the release of organochlorine compounds into the environment prompted the development of Elemental Chlorine Free (ECF) and Totally Chlorine Free (TCF) bleaching processes.
Pumping stations in sewage collection systems, also called lift stations, are normally designed to handle raw sewage that is fed from underground gravity pipelines (pipes that are laid at an angle so that a liquid can flow in one direction under gravity). Sewage is fed into and stored in an underground pit, commonly known as a wet well. The well is equipped with electrical instrumentation to detect the level of sewage present. When the sewage level rises to a predetermined point, a pump will be started to lift the sewage upward through a pressurized pipe system called a sewer force main from where the sewage is discharged into a gravity manhole. From here the cycle starts all over again until the sewage reaches its point of destination – usually a treatment plant. By this method, pumping stations are used to move waste to higher elevations. In the case of high sewage flows into the well (for example during peak flow periods and wet weather) additional pumps will be used. If this is insufficient, or in the case of failure of the pumping station, a backup in the sewer system can occur, leading to a sanitary sewer overflow – the discharge of raw sewage into the environment.
Sewage pumping stations are typically designed so that one pump or one set of pumps will handle normal peak flow conditions. Redundancy is built into the system so that in the event that any one pump is out of service, the remaining pump or pumps will handle the designed flow. The storage volume of the wet well between the 'pump on' and 'pump off' settings is designed to minimize pump starts and stops, but is not so long a detention time as to allow the sewage in the wet well to go septic.
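The 'pump on'/'pump off' hysteresis and the duty/assist arrangement described above can be sketched as simple control logic. The level setpoints below are hypothetical illustrations, not design values:

```python
# Sketch of wet-well pump control with hysteresis: the lead pump starts at a
# high level and stops at a low level so starts stay infrequent, and a lag
# pump assists during peak or wet-weather inflow. Setpoints are hypothetical.

def pump_command(level_m, lead_on, lag_on):
    """Return updated (lead_on, lag_on) states for a wet-well level in meters."""
    LEAD_START, LEAD_STOP = 2.0, 0.5   # 'pump on' / 'pump off' settings
    LAG_START, LAG_STOP = 2.8, 1.0     # second pump for high-flow periods
    if level_m >= LEAD_START:
        lead_on = True
    elif level_m <= LEAD_STOP:
        lead_on = False                # hysteresis: no change between setpoints
    if level_m >= LAG_START:
        lag_on = True
    elif level_m <= LAG_STOP:
        lag_on = False
    return lead_on, lag_on

state = (False, False)
for level in (1.0, 2.1, 1.5, 2.9, 1.2, 0.4):   # a rising then falling well
    state = pump_command(level, *state)

print(state)  # both pumps off once the well is drawn down to 0.4 m
```

The gap between start and stop levels is exactly the stored wet-well volume mentioned above: wide enough to limit pump starts, narrow enough that sewage does not sit long enough to go septic.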
The interior of a sewage pump station is a very dangerous place. Poisonous gases such as methane and hydrogen sulfide can accumulate in the wet well; an ill-equipped person entering the well would be overcome by fumes very quickly. Any entry into the wet well requires the correct confined space entry method for a hazardous environment. To minimize the need for entry, the facility is normally designed to allow pumps and other equipment to be removed from outside the wet well.
Expediting specifically refers to "the 'rushing' or 'chasing' of production or purchase orders which are needed in less than the normal lead time."
The membranes used for reverse osmosis have a dense barrier layer in the polymer matrix where most separation occurs. In most cases the membrane is designed to allow only water to pass through this dense layer while preventing the passage of solutes (such as salt ions). This process requires that a high pressure be exerted on the high concentration side of the membrane, usually 2–17 bar (30–250 psi) for fresh and brackish water, and 40–70 bar (600–1000 psi) for seawater, which has around 24 bar (350 psi) natural osmotic pressure which must be overcome.
This process is best known for its use in desalination (removing the salt from sea water to get fresh water), but it has also been used to purify fresh water for medical, industrial and domestic applications since the early 1970s.
When two solutions with different concentrations of a solute are mixed, the total amount of solutes in the two solutions will be equally distributed in the total amount of solvent from the two solutions. Instead of mixing the two solutions together, they can be put in two compartments where they are separated from each other by a semipermeable membrane. The semipermeable membrane does not allow the solutes to move from one compartment to the other, but allows the solvent to move. Since equilibrium cannot be achieved by the movement of solutes from the compartment with high solute concentration to the one with low solute concentration, it is instead achieved by the movement of the solvent from areas of low solute concentration to areas of high solute concentration. When the solvent moves away from low concentration areas, it causes these areas to become more concentrated. On the other side, when the solvent moves into areas of high concentration, solute concentration will decrease. This process is termed osmosis. The tendency for solvent to flow through the membrane can be expressed as "osmotic pressure", since it is analogous to flow caused by a pressure differential.
In reverse osmosis, in a similar setup as that in osmosis, pressure is applied to the compartment with high concentration. In this case, there are two forces influencing the movement of water: the pressure caused by the difference in solute concentration between the two compartments (the osmotic pressure) and the externally applied pressure.
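The balance of these two forces can be illustrated numerically. The sketch below estimates seawater's osmotic pressure with the ideal van 't Hoff relation, an approximation that overestimates somewhat for real seawater (whose measured value is nearer the ~24 bar quoted above), and the net driving pressure at a typical applied pressure:

```python
# Estimating the two pressures at work in reverse osmosis. The van 't Hoff
# relation (pi = i * M * R * T) is an idealized approximation; real seawater
# sits somewhat lower, near the ~24 bar figure quoted above.

R = 0.083145  # gas constant in L*bar/(mol*K)

def osmotic_pressure_bar(molarity, ions_per_formula=2, temp_k=298.0):
    """van 't Hoff estimate; NaCl dissociates into two ions (i = 2)."""
    return ions_per_formula * molarity * R * temp_k

pi_seawater = osmotic_pressure_bar(0.6)      # seawater is roughly 0.6 M NaCl
net_driving_pressure = 60.0 - pi_seawater    # at a 60 bar applied pressure

# Water permeates only while the net driving pressure stays positive.
print(round(pi_seawater, 1), round(net_driving_pressure, 1))
```

This is why seawater systems need 40-70 bar while brackish water needs far less: the applied pressure must comfortably exceed the feed's osmotic pressure, which grows as salt is concentrated on the reject side.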
Headquartered in Milwaukee, Wisconsin, the company employs about 20,000 people serving customers in more than 80 countries. Rockwell Automation spun off from Rockwell International in 2001 and retained Entek. From there, Rockwell Automation went through a series of acquisitions, notably Propack Data (now Rockwell Automation Solutions GmbH) in 2002, DataSweep in 2005, GEPA in 2006, ICS Triplex and Pavilion Technologies in 2007, and Incuity in 2008. On January 31, 2007, Rockwell Automation sold its PowerSystems Division, which consisted of Dodge mechanical products and Reliance Electric motors with headquarters in Greenville, South Carolina, to Baldor Electric Company.
Budgeting refers to a list of all planned expenses and revenues. A budget is an important concept in microeconomics, which uses a budget line to illustrate the trade-offs between two or more goods. In other terms, a budget is an organizational plan stated in monetary terms.
In summary, the purpose of budgeting is to: provide a forecast of revenues and expenditures i.e. construct a model of how our business might perform financially speaking if certain strategies, events and plans are carried out; and enable the actual financial operation of the business to be measured against the forecast.
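The second purpose, measuring actual operation against the forecast, reduces to a simple variance calculation. The categories and amounts below are hypothetical:

```python
# Budget-versus-actual variance for a hypothetical period. A shortfall in
# revenue, or an overrun on an expense line, shows up as a variance that
# flags where actual operation diverged from the forecast.

budget = {"revenue": 120_000, "materials": -40_000, "labor": -35_000}
actual = {"revenue": 110_000, "materials": -42_000, "labor": -33_000}

variance = {item: actual[item] - budget[item] for item in budget}

print(variance)  # revenue fell 10,000 short; labor came in 2,000 under budget
```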
A wet scrubber is used to clean air or other gases of various pollutants and dust particles. Wet scrubbing works via the contact of target compounds or particulate matter with the scrubbing solution. Solutions may simply be water (for dust) or complex solutions of reagents that specifically target certain compounds.
Removal efficiency of pollutants is improved by increasing residence time in the scrubber or by the increase of surface area of the scrubber solution by the use of a spray nozzle, packed towers or an aspirator. Wet scrubbers will often significantly increase the proportion of water in waste gases of industrial processes which can be seen in a stack plume.
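As a rough illustration of the residence-time effect, a first-order mass-transfer model shows efficiency rising steeply with time spent in the scrubber. The rate constant here is a hypothetical lumped value, standing in for spray surface area and mass-transfer effects:

```python
import math

# First-order model of pollutant removal versus residence time in the
# scrubber: efficiency = 1 - exp(-k * t). The rate constant k is a
# hypothetical lumped value combining spray surface area and mass transfer.

def removal_efficiency(residence_s, k_per_s=0.8):
    return 1.0 - math.exp(-k_per_s * residence_s)

eff_1s = removal_efficiency(1.0)
eff_4s = removal_efficiency(4.0)

print(round(eff_1s, 2), round(eff_4s, 2))  # ~0.55 at 1 s, ~0.96 at 4 s
```

Under this simple model, increasing the rate constant (more spray surface area, finer droplets) and increasing residence time are interchangeable routes to the same removal efficiency, which matches the design levers listed above.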
A dry or semi-dry scrubbing system, unlike the wet scrubber, does not saturate the flue gas stream that is being treated with moisture. In some cases no moisture is added; while in other designs only the amount of moisture that can be evaporated in the flue gas without condensing is added. Therefore, dry scrubbers do not have a stack steam plume or wastewater handling/disposal requirements. Dry scrubbing systems are used to remove acid gases (such as SO2 and HCl) primarily from combustion sources.
There are a number of dry type scrubbing system designs. However, all consist of two main sections or devices: a device to introduce the acid gas sorbent material into the gas stream and a particulate matter control device to remove reaction products, excess sorbent material as well as any particulate matter already in the flue gas.
Dry scrubbing systems can be categorized as dry sorbent injectors (DSIs) or as spray dryer absorbers (SDAs). Spray dryer absorbers are also called semi-dry scrubbers or spray dryers.
Dry sorbent injection involves the addition of an alkaline material (usually hydrated lime or soda ash) into the gas stream to react with the acid gases. The sorbent can be injected directly into several different locations: the combustion process, the flue gas duct (ahead of the particulate control device), or an open reaction chamber (if one exists). The acid gases react with the alkaline sorbents to form solid salts which are removed in the particulate control device. These simple systems can achieve only limited acid gas (SO2 and HCl) removal efficiencies. Higher collection efficiencies can be achieved by increasing the flue gas humidity (i.e., cooling using water spray). These devices have been used on medical waste incinerators and a few municipal waste combustors.
In spray dryer absorbers, the flue gases are introduced into an absorbing tower (dryer) where the gases are contacted with a finely atomized alkaline slurry. Acid gases are absorbed by the slurry mixture and react to form solid salts which are removed by the particulate control device. The heat of the flue gas is used to evaporate all the water droplets, leaving a non-saturated flue gas to exit the absorber tower. Spray dryers are capable of achieving high (80+%) acid gas removal efficiencies. These devices have been used on industrial and utility boilers and municipal waste combustors.
While the Northwest biodiesel industry will utilize brassica crops for biodiesel feedstock, the industry will be modeled on the soy bean and soy-based biodiesel industries of the Midwest. The soy bean industry was well-established before soy-based biodiesel production began. The soy bean crushing facilities existed and the industry evolved around the use of the soy meal generated by the crushing facilities. The extracted soy oil was not the primary commodity and had relatively low value. In addition, the industry had a surplus of the oil. Biodiesel production was a means to put this oil to use and the cost of the low value oil meant that biodiesel could be produced at a relatively competitive price with petroleum diesel.
Unfortunately, a large-scale, viable oil seed industry for canola and mustard seed does not currently exist in Eastern Washington. This industry must be established in order to create an economically feasible biodiesel industry. The oil seed industry will include developing local oil seed crushing facilities and creating markets for the meal left after the oil has been extracted. Local oil seed crushing is necessary to avoid the costs of shipping the product to and from a distant processing facility. For farmers to grow oil seed crops, they must be able to make enough money from these crops to pay for growing it and encourage them to change from current rotational crops such as peas and lentils to the oil seed crops of mustard and canola. At current yields and commodity prices, the cost of oil seeds to a crusher would result in a cost of the extracted oil that is too high for the production of a competitively priced biodiesel fuel.
In order to reduce the cost of the extracted oil, markets must be established for the meal. The income generated will help to offset the cost of the oil production and has the potential to make the oil the lowest value commodity produced by the crushing facility. Biodiesel production then becomes economically feasible. The mustard and canola meal generated by the crushing facility can be used as soil amendments, soil fumigants, pesticides, herbicides, fertilizers, and food additives for human and animal consumption. The value of the meal for many of these applications is very high, but several steps remain in order to realize this value. For example, use of mustard meal as an herbicide requires testing and labeling by the U.S. Environmental Protection Agency. Feedstock seed oils include grape and flax seed.
Hybrid Solution. PRESSS (Precast Seismic Structural Systems) is a U.S./Japan joint program for the development of structural methods suitable for ductile reinforcement and joining of rigid concrete plate and beam structures. As these systems comprise both post-tensioning and external energy dissipation, they are called hybrid solutions. Fiber reinforced concrete is an essential element in these structures, as it allows the creation of structural regions capable of "plastic hinging", a feature that enables progressive, flexible joint failure without catastrophic collapse.
Base isolators. Base isolation is a collection of structural elements of a building that should substantially decouple the building's structure from the shaking ground, thus protecting the building's integrity and enhancing its seismic performance. This technology, which is a kind of seismic vibration control, can be applied both to a newly designed building and to the seismic upgrading of existing structures. Normally, excavations are made around the building and the building is separated from the foundations. Steel or reinforced concrete beams replace the connections to the foundations, while under these, the isolating pads, or base isolators, replace the material removed. While the base isolation tends to restrict transmission of the ground motion to the building, it also keeps the building positioned properly over the foundation. Careful attention to detail is required where the building interfaces with the ground, especially at entrances, stairways and ramps, to accommodate sufficient relative motion of those structural elements.
Dampers. Dampers absorb the energy of motion and convert it to heat, thus "damping" resonant effects in structures that are rigidly attached to the ground. In these cases, the threat of damage does not come from the initial shock itself, but rather from the periodic resonant motion of the structure that repeated ground motion induces.
Slosh tanks. A large tank of water may be placed on an upper floor. During a seismic event, the water in this tank will slosh back and forth, directed by baffles, partitions that prevent the tank itself from becoming resonant; through its mass the water may change or counter the resonant period of the building. Additional kinetic energy is converted to heat by the baffles and dissipated through the water; any temperature rise will be insignificant.
Shock absorbers. Shock absorbers, similar to those used in automotive suspensions, may be used to connect portions of a structure that are free to move relative to each other and that may collide during an earthquake. Where a rigid connection could break or impose excessive strain on the buildings, and a loose connection could be dismantled, the shock absorbers allow the relative motion to be restrained by transferring and dissipating energy. This can be especially effective if the two structures have differing fundamental frequencies of resonance, as each structure may then assist in inhibiting the motion of the other.
Tuned mass dampers. Tuned mass dampers employ movable weights on some sort of springs. These are typically employed to reduce wind sway in very tall, light buildings. Similar designs may be employed to impart earthquake resistance in eight to ten story buildings that are prone to destructive earthquake induced resonances. To increase the shielded range of forcing frequencies, the concept of Multi-Frequency Quieting Building System (MFQBS) was developed.
Active damping with fallback. Very tall buildings ("skyscrapers"), when built using modern lightweight materials, might sway uncomfortably (but not dangerously) in certain wind conditions. A solution to this problem is to include at some upper story a large mass, constrained, but free to move within a limited range, and moving on some sort of bearing system such as an air cushion or hydraulic film. Hydraulic pistons, powered by electric pumps and accumulators, are actively driven to counter the wind forces and natural resonances. These may also, if properly designed, be effective in controlling excessive motion - with or without applied power - in an earthquake. In general, though, modern steel frame high rise buildings are not as subject to dangerous motion as are medium rise (eight to ten story) buildings, as the resonant period of a tall and massive building is longer than the approximately one second shocks applied by an earthquake.
Reinforcement. The most common form of seismic retrofit to lower buildings is adding strength to the existing structure to resist seismic forces. The strengthening may be limited to connections between existing building elements or it may involve adding primary resisting elements such as walls or frames, particularly in the lower stories.
Connections between buildings and their expansion additions. Frequently, building additions will not be strongly connected to the existing structure, but simply placed adjacent to it, with only minor continuity in flooring, siding, and roofing. As a result, the addition may have a different resonant period than the original structure, and they may easily detach from one another. The relative motion will then cause the two parts to collide, causing severe structural damage. Proper construction will tie the two building components rigidly together so that they behave as a single mass or employ dampers to expend the energy from relative motion, with appropriate allowance for this motion.
Exterior concrete columns. Historic buildings, made of unreinforced masonry, may have culturally important interior detailing or murals that should not be disturbed. In this case, the solution may be to add a number of steel, reinforced concrete, or prestressed concrete columns to the exterior. Careful attention must be paid to the connections with other members such as footings, top plates, and roof trusses.
In the United States, this often occurs in the afternoon, especially during the summer months when the air conditioning load is high. The peak power load generally occurs between 4pm and 5pm when people return home from work, start cooking dinner, and turn up the air conditioning. During this time many workplaces are additionally still open and consuming power.
The time that a peaker plant operates may be many hours a day or as little as a few hours per year, depending on the condition of the region's electrical grid. It is expensive to build an efficient power plant, so if a peaker plant is only going to be run for a short or highly variable time, it does not make economic sense to make it as efficient as a base load power plant. In addition, the equipment and fuels used in base load plants are often unsuitable for use in peaker plants because the fluctuating conditions would severely strain the equipment. For these reasons, nuclear, geothermal, waste-to-energy, coal, and biomass plants are rarely, if ever, operated as peaker plants.
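The economics above can be sketched with made-up figures: a cheap, less efficient plant beats an expensive, efficient one on cost per MWh when it runs only a few hours a year, and loses when run continuously. All dollar amounts below are hypothetical illustrations:

```python
# Why peakers are built cheap rather than efficient: at low running hours
# the annual capital charge dominates cost per MWh. All dollar figures
# below are made-up for illustration.

def cost_per_mwh(capital_charge_per_mw_yr, fuel_cost_per_mwh, hours_per_year):
    """Simple levelized cost for one MW of capacity run `hours_per_year` hours."""
    return capital_charge_per_mw_yr / hours_per_year + fuel_cost_per_mwh

peaker_200h = cost_per_mwh(30_000, 80, 200)    # cheap plant, dear fuel
base_200h   = cost_per_mwh(150_000, 40, 200)   # dear plant, cheap fuel
base_8000h  = cost_per_mwh(150_000, 40, 8000)  # same plant, run continuously

print(round(peaker_200h), round(base_200h), round(base_8000h))
```

At 200 hours a year the low-capital plant is far cheaper per MWh despite its worse fuel cost; at 8,000 hours the ranking reverses, which is why efficiency is bought for base load plants and skipped for peakers.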
Peaker plants are generally gas turbines that burn natural gas. A few burn petroleum-derived liquids, such as diesel oil and jet fuel, but they are usually more expensive than natural gas, so their use is limited. However, many peaker plants are able to use petroleum as a backup fuel. The thermodynamic efficiency of single-cycle gas turbine power plants ranges from 20 to 42%, with between 30 to 42% being average for a new plant.
For greater efficiency, a Heat Recovery Steam Generator (HRSG) is added at the exhaust; this is known as a combined cycle plant. Cogeneration uses waste exhaust heat for process or other heating uses. Both of these options are used only in plants that are intended to be operated for longer periods than usual. Reciprocating engines are sometimes used for smaller peaker plants.
Larger peaking plants that operate for extended periods of time are almost always required to have emissions controls and monitoring equipment. Smaller plants in the 3 MW range or less are often excluded from controlling their emissions due to the low number of hours they accumulate per year.
Although gas turbine plants dominate the peaker plant category, other plants may provide power on a peaking basis. Some hydroelectric plants are operated this way. Storage technologies like pumped storage can be used to provide peak load power. Photovoltaic arrays deliver most of their energy during peak load hours, so sometimes they are also included in the peaker class of power plants.
The opposite of a peaking plant are base load power plants, which operate continuously, stopping only for maintenance or unexpected outages. Intermediate plants operate between these extremes, curtailing their output in periods of low demand, such as during the night. Base load and intermediate plants are used preferentially to meet electrical demand because the lower efficiencies of peaker plants make them more expensive to operate.
Introduced in 1955, their "QO" line of 3/4-inch circuit breakers may be their best-known product line, used in the electrical distribution boards of many homes in North America. Another well-known Square D product line is the Powerlink circuit breaker, sold for larger buildings and skyscrapers. It was created for lighting control and can remotely operate lights in buildings through the internet.
Strategies are different from tactics in that: they are proactive and not reactive as tactics are; they are internal in source, and the business venture has absolute control over their application; and a strategy can only be applied once, after which it is a process of application with no unique element remaining. The outcome is normally a strategic plan which is used as guidance to define functional and divisional plans, including technology, marketing, etc.
Strategic planning is the formal consideration of an organization's future course. All strategic planning deals with at least one of three key questions:
"What do we do?"
"For whom do we do it?"
"How do we excel?"
In business strategic planning, the third question is better phrased "How can we beat or avoid competition?" (Bradford and Duncan, page 1). In many organizations, this is viewed as a process for determining where an organization is going over the next year or more, typically 3 to 5 years, although some extend their vision to 20 years.
In order to determine where it is going, the organization needs to know exactly where it stands, then determine where it wants to go and how it will get there. The resulting document is called the "strategic plan".
Strategic planning may be a tool for effectively plotting the direction of a company; however, strategic planning itself cannot foretell exactly how the market will evolve or what issues will surface in the days ahead. Therefore, strategic innovation and ongoing revision of the 'strategic plan' have to be a cornerstone of strategy for an organization to survive a turbulent business climate.
About 200 countries grow the crop, producing 1,324.6 million tons (more than six times the amount of sugar beet produced). As of 2005, the world's largest producer of sugar cane by far is Brazil, followed by India. Uses of sugar cane include the production of sugar, Falernum, molasses, rum, soda, cachaça (the national spirit of Brazil) and ethanol for fuel. The bagasse that remains after sugarcane crushing may be burned to provide both heat (used in the mill) and electricity (typically sold to the consumer electricity grid). Because of its high cellulose content, it may also be used as a raw material for paper and cardboard, branded as "environmentally friendly" since it is made from a by-product of sugar production.
Fiber from Bengal cane (Saccharum munja or Saccharum bengalense) is also used to make mats, screens and baskets in West Bengal. This fiber is also used in the Upanayanam, a rite-of-passage ritual in India, and is therefore religiously significant as well.
Switchgrass is a hardy, perennial rhizomatous grass which begins growth in late spring. It can grow up to 1.8-2.2 m high but is typically shorter than Big Bluestem grass or Indiangrass. The leaves are 30-90 cm long, with a prominent midrib. Switchgrass uses C4 carbon fixation, giving it an advantage in conditions of drought and high temperature. Its flowers have a well-developed panicle, often up to 60 cm long, and bear a good crop of fruits. The fruits are 3-6 mm long and up to 1.5 mm wide, and are developed from a single-flowered spikelet. Both glumes are present and well developed. When ripe, the seeds sometimes take on a pink or dull-purple tinge, and turn golden brown with the foliage of the plant in the fall. Switchgrass is a self-seeding crop, which means farmers do not have to plant and re-seed after annual harvesting. Once established, a switchgrass stand can survive for ten years or longer. Also, unlike corn, switchgrass can grow on marginal lands and requires little or no fertilizer to thrive.
The terms trickle filter, trickling biofilter, biofilter, biological filter and biological trickling filter are often used to refer to a trickling filter. These systems have also been described as intermittent filters, packed media bed filters, alternative septic systems, percolating filters, attached growth processes, and fixed film processes.
The treatment of sewage or other wastewater with trickling filters is among the oldest and most well characterized treatment technologies.
Sewage treatment trickle filters. Onsite sewage facilities (OSSF) are recognized as viable, low-cost, long-term, decentralized approaches to sewage treatment if they are planned, designed, installed, operated and maintained properly (USEPA, 1997).
Sewage trickling filters are used in areas not serviced by municipal wastewater treatment plants (WWTPs). They are typically installed where traditional septic tank systems are failing, cannot be installed due to site limitations, or where improved levels of treatment are required for environmental benefits such as preventing contamination of ground water or surface water. Sites with a high water table, high bedrock, heavy clay, a small land area, or which require minimal site destruction (for example, tree removal) are ideally suited for trickling filters.
All varieties of sewage trickling filters have a low and sometimes intermittent power consumption. They can be somewhat more expensive than traditional septic tank-leach field systems; however, their use allows for better treatment, a reduction in the size of the disposal area, less excavation, and higher-density land development.
Configurations and components. All sewage trickling filter systems share the same fundamental components: a septic tank for fermentation and primary settling of solids; a filter medium upon which beneficial microbes (biomass, biofilm) are promoted and developed; a container which houses the filter medium; a distribution system for applying the wastewater to be treated to the filter medium; and a distribution system for disposal of the treated effluent.
By treating septic tank effluent before it is distributed into the ground, higher treatment levels are obtained, and smaller disposal means, such as a leach field, shallow pressure trench or area bed, are required.
Systems can be configured for single-pass use where the treated water is applied to the trickling filter once before being disposed of, or for multi-pass use where a portion of the treated water is cycled back to the septic tank and re-treated via a closed-loop. Multi-pass systems result in higher treatment quality and assist in removing Total Nitrogen (TN) levels by promoting nitrification in the aerobic media bed and denitrification in the anaerobic septic tank.
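A rough way to see why multi-pass recirculation raises treatment quality is to model each pass over the media bed as removing a fixed fraction of the remaining contaminant. The 60% per-pass removal and 200 mg/L influent concentration below are hypothetical, not measured values:

```python
# Sketch of single-pass vs. multi-pass treatment, assuming each pass
# over the media bed removes a fixed fraction of what remains.
# The per-pass removal and influent BOD figures are hypothetical.

def remaining_after_passes(influent_mg_l, removal_per_pass, passes):
    """Contaminant concentration left after repeated passes."""
    return influent_mg_l * (1 - removal_per_pass) ** passes

single_pass = remaining_after_passes(200.0, 0.60, 1)  # one trip over the media
three_pass = remaining_after_passes(200.0, 0.60, 3)   # recirculated twice more
print(single_pass, three_pass)
```

Real per-pass removal depends on media type, loading rate, and temperature, so this constant-fraction model is only a first approximation.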
Trickling filters differ primarily in the type of filter media used to house the microbial colonies. Types of media most commonly used include plastic matrix material, open-cell polyurethane foam, sphagnum peat moss, recycled tires, clinker, gravel, sand and geotextiles. An ideal filter medium maximizes the surface area available for microbial attachment and wastewater retention time, allows air flow, resists plugging and does not degrade. Some residential systems require forced aeration units, which increase maintenance and operational costs.
Wastewaters from a variety of industrial processes have been treated in trickling filters. Such industrial wastewater trickling filters consist of two types: large tanks or concrete enclosures filled with plastic packing or other media; and vertical towers filled with plastic packing or other media.
The availability of inexpensive plastic tower packings has led to their use as trickling filter beds in tall towers, some as high as 20 meters. As early as the 1960s, such towers were in use at: the Great Northern Oil's Pine Bend Refinery in Minnesota; the Cities Service Oil Company Trafalgar Refinery in Oakville, Ontario and at a kraft paper mill.
The treated water effluent from industrial wastewater trickling filters is very often subsequently processed in a clarifier-settler to remove the sludge that sloughs off the microbial slime layer attached to the trickling filter media.
Some of the latest trickle filter technology involves aerated biofilters, which are essentially trickle filters consisting of plastic media in vessels, using blowers to inject air at the bottom of the vessels and allowing either downflow or upflow of the wastewater.
A programmer can put together an application using the components provided with Visual Basic itself. Programs written in Visual Basic can also use the Windows API, but doing so requires external function declarations.
The final release was version 6 in 1998. Microsoft's extended support ended in February 2008 and the designated successor was Visual Basic .NET.
Sewage is created by residences, institutions, hospitals and commercial and industrial establishments. It can be treated close to where it is created (in septic tanks, biofilters or aerobic treatment systems), or collected and transported via a network of pipes and pump stations to a municipal treatment plant (see sewerage and pipes and infrastructure). Sewage collection and treatment is typically subject to local, state and federal regulations and standards. Industrial sources of wastewater often require specialized treatment processes (see Industrial wastewater treatment).
Sewage treatment conventionally involves three stages, called primary, secondary and tertiary treatment. First, the solids are separated from the wastewater stream. Then dissolved biological matter is progressively converted into a solid mass by using indigenous, water-borne microorganisms. Finally, the biological solids are neutralized and then disposed of or re-used, and the treated water may be disinfected chemically or physically (for example by lagoons and micro-filtration). The final effluent can be discharged into a stream, river, bay, lagoon or wetland, or it can be used for the irrigation of a golf course, greenway or park. If it is sufficiently clean, it can also be used for groundwater recharge.
Raw influent (sewage) includes household waste liquid from toilets, baths, showers, kitchens, sinks, and so forth that is disposed of via sewers. In many areas, sewage also includes liquid waste from industry and commerce. The separation of household waste into greywater and blackwater is becoming more common in the developed world, with greywater being permitted for watering plants or recycled for flushing toilets. Much sewage also includes some surface water from roofs or hard-standing areas. Municipal wastewater therefore includes residential, commercial, and industrial liquid waste discharges, and may include stormwater runoff. Sewage systems capable of handling stormwater are known as combined systems or combined sewers. Such systems are usually avoided because they complicate, and thereby reduce the efficiency of, sewage treatment plants owing to their seasonality. The variability in flow also often leads to larger-than-necessary, and consequently more expensive, treatment facilities. In addition, heavy storms that contribute more flow than the treatment plant can handle may overwhelm the sewage treatment system, causing a spill or overflow (called a combined sewer overflow, or CSO, in the United States). It is preferable to have a separate storm drain system for stormwater in areas developed with sewer systems.
As rainfall runs over the surface of roofs and the ground, it may pick up various contaminants including soil particles and other sediment, heavy metals, organic compounds, animal waste, and oil and grease. Some jurisdictions require stormwater to receive some level of treatment before being discharged directly into waterways. Examples of treatment processes used for stormwater include sedimentation basins, wetlands, buried concrete vaults with various kinds of filters, and vortex separators (to remove coarse solids).
The site where the raw wastewater is processed before it is discharged back to the environment is called a wastewater treatment plant (WWTP). The order and types of mechanical, chemical and biological systems that comprise the wastewater treatment plant are typically the same for most developed countries:
Removal of large objects
Removal of sand and grit
Oxidation bed (oxidizing bed) or aeration system
Chemical treatment (this step is usually combined with settling and other processes to remove solids, such as filtration; the combination is referred to in the U.S. as physical-chemical treatment)
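The sequence above can be sketched as an ordered treatment train. The per-stage removal fractions below are invented for illustration and vary widely between real plants:

```python
# Sketch of the conventional treatment train as an ordered pipeline.
# Every removal fraction here is hypothetical and purely illustrative.

STAGES = [
    ("screening (large objects)", 0.05),
    ("grit removal", 0.10),
    ("aeration / oxidation bed", 0.85),
    ("chemical treatment + settling", 0.50),
]

def run_train(influent_bod_mg_l, stages):
    """Apply each stage's removal fraction in order; return effluent BOD."""
    bod = influent_bod_mg_l
    for name, removal in stages:
        bod *= (1 - removal)
    return bod

effluent = run_train(400.0, STAGES)
print(effluent)
```

The point of the sketch is the ordering: the early mechanical stages remove relatively little dissolved load, and most of the reduction happens in the biological (aeration) stage.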
It started as a simplified subset of the Standard Generalized Markup Language (SGML), and is designed to be relatively human-legible. By adding semantic constraints, application languages can be implemented in XML. These include XHTML, RSS, MathML, GraphML, Scalable Vector Graphics, MusicXML, and thousands of others. Moreover, XML is sometimes used as the specification language for such application languages.
XML is recommended by the World Wide Web Consortium (W3C). It is a fee-free open standard. The recommendation specifies both the lexical grammar and the requirements for parsing.
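As a concrete illustration, the sketch below parses a minimal, well-formed XML document with Python's standard-library ElementTree module; the catalog/book structure and element names are invented for the example.

```python
# Parse a small, well-formed XML document with Python's standard
# library. The catalog/book structure here is purely illustrative.
import xml.etree.ElementTree as ET

doc = """<catalog>
  <book id="bk101"><title>XML Basics</title></book>
  <book id="bk102"><title>Schema Design</title></book>
</catalog>"""

root = ET.fromstring(doc)
titles = [book.find("title").text for book in root.findall("book")]
print(titles)  # ['XML Basics', 'Schema Design']
```

Note that the parser enforces well-formedness: every opening tag must be matched by a closing tag, and attributes must be quoted, exactly as the W3C recommendation requires.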