Numerical simulation was used to evaluate the effectiveness of an oil spill response plan developed by Western Canada Marine Response Corporation (WCMRC) for the southwest coast of Canada. The plan was part of the permitting process for a proposed terminal expansion that would result in an increase in tanker traffic. The purpose of this response evaluation was to point the way to the developmen…
Using GC/MS analysis, the distributions of n-alkanes, PAHs, and biomarkers, together with the consistency of diagnostic ratios across different kinds of oil, were studied. The results indicate that: (1) Some diagnostic ratios of different oils are not obviously different and may even be consistent. These diagnostic ratios include two n-alkane diagnostic ratios ((C19+C20)/(C19~C22) and CPI), four terpane diag…
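The two n-alkane diagnostic ratios named above can be sketched numerically. The peak areas below are hypothetical, and the CPI formulation shown is one common odd-over-even definition (an assumption; several CPI definitions exist in the literature):

```python
# Hedged sketch: computing two n-alkane diagnostic ratios from hypothetical
# GC/MS peak areas (arbitrary units); real studies use calibrated concentrations.
def ratio_c19_c20(areas):
    """(C19+C20)/(C19~C22), i.e. C19+C20 over the sum of C19 through C22."""
    num = areas[19] + areas[20]
    den = sum(areas[c] for c in range(19, 23))
    return num / den

def cpi(areas, lo=25, hi=33):
    """Carbon Preference Index; one common odd-over-even formulation
    (an assumption: CPI definitions vary across studies)."""
    odd = sum(areas[c] for c in range(lo, hi + 1, 2))
    even_lower = sum(areas[c] for c in range(lo - 1, hi, 2))
    even_upper = sum(areas[c] for c in range(lo + 1, hi + 2, 2))
    return 0.5 * (odd / even_lower + odd / even_upper)

# Hypothetical, equal peak areas for n-alkanes C17..C34
areas = {c: 100.0 for c in range(17, 35)}
print(round(ratio_c19_c20(areas), 3))  # equal areas -> 0.5
print(round(cpi(areas), 3))            # equal areas -> 1.0 (no odd/even preference)
```

With equal areas the ratios collapse to their neutral values, which is a quick sanity check before applying them to real chromatograms.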
Abstract: Childhood malnutrition remains common in India. We visited families in 40 urban informal settlement areas in Mumbai to document stunting, wasting, and overweight in children under five, and to examine infant and young child feeding (IYCF) in children under 2 years. We administered questions on eight core WHO IYCF indicators and on sugary and savory snack foods, and measured weight a…
To enable early detection of oil spill accidents in tank zones and to make up the existing shortfall, the fuzzy comprehensive evaluation method was used to construct an oil spill risk fuzzy comprehensive evaluation model. First, historical data on oil spill accidents at China National Petroleum Corporation (CNPC) were collected and analyzed, and a questionnaire was administered to the field s…
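Fuzzy comprehensive evaluation of the kind described here typically composes a factor weight vector with a membership matrix to obtain an overall risk grade. A minimal sketch, in which the grades, factor weights, and memberships are hypothetical placeholders rather than values from the paper:

```python
# Hedged sketch of the fuzzy comprehensive evaluation step: combine a factor
# weight vector W with a membership matrix R to get an overall risk grade.
# All numbers below are illustrative, not taken from the study.
risk_grades = ["low", "medium", "high"]
W = [0.5, 0.3, 0.2]              # weights of three hypothetical risk factors
R = [                            # membership of each factor in each grade
    [0.6, 0.3, 0.1],
    [0.2, 0.6, 0.2],
    [0.1, 0.3, 0.6],
]
# Weighted-average composition B = W * R (one common fuzzy operator choice)
B = [sum(W[i] * R[i][j] for i in range(len(W))) for j in range(len(risk_grades))]
overall = risk_grades[B.index(max(B))]
print([round(b, 2) for b in B], overall)
```

Because each membership row and the weight vector sum to 1, the resulting grade vector B also sums to 1, and the grade with the largest component is reported as the overall risk level.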
A set of sequentially weathered oil residues in sediments, collected from Dalian Bay at different times after the “7-16” oil spill accident, was analyzed by gas chromatography–mass spectrometry (GC-MS) and gas chromatography–isotope ratio mass spectrometry (GC-IRMS) to characterize the weathering process and to evaluate the potential of GC-IRMS as a correlation tool in oil spill identification…
As offshore oil and gas production moves toward deepwater, deepwater pipeline oil spills will certainly increase in the foreseeable future. Investigating emergency repair methods for deepwater pipeline oil spills is therefore a promising line of study. This paper surveys the overseas and domestic research status of deepwater oil spill emergency repair. Different deepwater pipeline oil spill emergency repair me…
Deepwater pipeline failure may lead to huge economic losses and even environmental pollution. Research on deepwater pipeline emergency repair methods is very important for safe marine oil production. This paper first discussed deepwater pipeline damage and its patterns. Then corresponding countermeasures were investigated. The demand analyses of the necessary maintenance equipment used in repairing wa…
Abstract: This study reviews research themes and methods used in information technology (IT) in government and e-government research. Although IT/e-government studies (including inward aspects of IT applications in government and e-government studies) continue to increase, they are not comprehensively understood as a subfield within public administration. Based on Rosenbloom’s three competin…
Due to the effects of multiple factors, such as tide, temperature and pressure, a deepwater oil spill differs greatly from a surface oil spill in terms of behavior and fate. In this paper, according to the deepwater environmental characteristics and oil properties in the South China Sea, we establish a high-resolution hydrodynamic background data field and build a model of deepwater oil spill a…
A multi-grid regional ocean circulation model is established on the basis of ROMS to develop the ocean current forecast system for the South China Sea. The model is first spun up through 15 years of integration with an annually cyclic sea surface forcing condition to reach a stationary annually cyclic circulation field. Then the model is integrated from February 2006 to September 2012 driven by …
SUMMARY: A huge obstacle to replacing gasoline as the main energy carrier in automotive applications is the relatively low energy density of suitable replacements. Gaseous energy carriers must be stored at high pressure or very low temperature in order to obtain an energy density suitable for vehicular use. High pressure gas storage requires large, cylindrical containers that occupy large amou…
Policy-based markets for environmental services include government procurement, private procurement to satisfy regulatory requirements and private procurement through government offset markets. These markets are increasingly popular and raise questions about optimal procurement under different regulatory frameworks. The design of these schemes draws together issues in auction design and contrac…
Aim: The purpose of the present study was to evaluate vaginal fluid β-human chorionic gonadotrophin (β-hCG) for the diagnosis of preterm premature rupture of membranes (PPROM). Material and Methods: An observational cross-sectional study was performed on 123 pregnant women who were in the third trimester of their gestation (28–37 weeks). The patients were divided into three groups: (i) PPROM…
This paper focuses on the analysis and evaluation of China’s HJ-1C SAR satellite with respect to oil spill identification. Six HJ-1C SAR images (S band, VV Pol, StripMap mode, 5 m resolution, Level-2) from a track in December 2012, covering the coastal region of Fujian Province of China, are used in this study. A comparison of oil spill detectability between HJ-1C and Envisat ASAR …
Detecting spilled oil films of different thicknesses is internationally recognized as a difficult problem. Hyperspectral remote sensing data can provide continuous spectra and are beneficial for the identification of oil films. Traditional detection methods fail to make full use of the different spectral characteristics of the oil film and are therefore unable to improve the identificatio…
Scientific studies have yielded evidence to support the common perception that climatic variables and associated natural resources and human systems are being affected by external forcings. Detection and attribution (D&A) of climate change provides a formal tool to decipher the complex causes of climate change. This work aims to statistically detect such climatic change signals, if any, in the …
Transients can introduce large pressure forces and rapid fluid accelerations into a water distribution system. These disturbances may result in pump and device failures, system fatigue, pipe ruptures or bursts, and even the intrusion of dirty water. Proper analysis of transients and the risk factors of existing networks helps in the formulation of low-cost, long-range, non-destructive pipe conditi…
While John Kingdon’s Multiple Streams Approach (MSA) remains a key reference point in the public policy literature, few have attempted to assess MSA holistically. To assess its broader impact and trends in usage, we combine in-depth analysis of representative studies, with comprehensive coverage of MSA-inspired articles, to categorize its impact. We find that Kingdon’s work makes two sep…
Bangalore, the IT capital of India, is one of the fastest growing cities in Asia. Rapid urbanization of Bangalore during the last two decades has posed a serious threat to the existence of ecological systems, specifically water bodies, which play a crucial part in supporting life. Remote sensing satellite images can play a significant role in the investigation, dynamic monitoring and planning of natur…
In the wake of a changing climate, the present water crisis seems to tighten its hold on mankind; water resources estimation is therefore an integral part of the planning, development and management of a country's water resources, and that estimation is based on several hydrological and meteorological parameters. Rainfall is the main source of ground and surface water resources. Rec…
The annual weather cycle of India comprises mainly wet and dry periods, with monsoonal rains as one of the significant wet periods. All-India monsoon rainfall shows strong spatio-temporal variations and large departures from its normal values. This study proposes to document the climatological characteristics, fluctuation features and periodic cycles in annual, seasonal and monthly rainfall ser…
Technology scaling has raised the specter of myriads of cheap, but unreliable and/or stochastic devices that must be creatively combined to create a reliable computing system. This has renewed the interest in computing that exploits stochasticity—embracing, not combating the device physics. If a stochastic representation is used to implement a programmable general-purpose architecture akin to…
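The core trick of the stochastic representation mentioned above (encoding a value in [0,1] as the density of 1s in a random bitstream, so that multiplication of independent values reduces to a bitwise AND) can be sketched as follows; the stream length is an illustrative choice that trades accuracy for cost:

```python
import random

# Hedged sketch of stochastic computing: a value p in [0,1] is encoded as a
# bitstream whose fraction of 1s is p; the AND of two independent streams
# then decodes to (approximately) the product of the two values.
def encode(p, n, rng):
    """Encode probability p as an n-bit stochastic bitstream."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def decode(bits):
    """Recover the encoded value as the fraction of 1s."""
    return sum(bits) / len(bits)

rng = random.Random(0)
n = 100_000
a, b = encode(0.8, n, rng), encode(0.5, n, rng)
prod = [x & y for x, y in zip(a, b)]
print(round(decode(prod), 2))  # close to 0.8 * 0.5 = 0.4
```

The decoded product is only statistically close to 0.4; the variance shrinks as the stream length grows, which is exactly the accuracy/cost trade-off stochastic architectures must manage.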
Long-latency cache accesses cause significant performance-impacting delays for both in-order and out-of-order processor systems. To address these delays, runahead pre-execution has been shown to produce speedups by warming-up cache structures during stalls caused by long-latency memory accesses. While improving cache related performance, basic runahead approaches do not otherwise utilize resul…
Weighted speedup is nowadays the most commonly used multiprogram workload performance metric. Weighted speedup is a weighted-IPC metric, i.e., the multiprogram IPC of each program is first weighted with its isolated IPC. Recently, Michaud questioned the validity of weighted-IPC metrics, arguing that they are inconsistent and that weighted speedup favors unfairness [4]. Instead, he advocates us…
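The weighted-IPC construction described above can be sketched concretely; the per-program IPC values below are illustrative, and the harmonic-mean variant is included only as one commonly discussed fairness-oriented alternative, not as the specific metric the abstract's citation advocates:

```python
# Hedged sketch of weighted-IPC multiprogram metrics. IPC values are made up.
def weighted_speedup(ipc_multi, ipc_alone):
    """Sum of per-program IPCs, each normalized by its isolated (alone) IPC."""
    return sum(m / a for m, a in zip(ipc_multi, ipc_alone))

def harmonic_speedup(ipc_multi, ipc_alone):
    """Harmonic mean of per-program normalized IPCs; a common
    fairness-leaning alternative (an assumption, not from this abstract)."""
    n = len(ipc_multi)
    return n / sum(a / m for m, a in zip(ipc_multi, ipc_alone))

ipc_alone = [2.0, 1.0]   # each program running alone
ipc_multi = [1.5, 0.4]   # the same programs co-scheduled
print(weighted_speedup(ipc_multi, ipc_alone))            # 0.75 + 0.40 = 1.15
print(round(harmonic_speedup(ipc_multi, ipc_alone), 3))  # 2 / (1/0.75 + 1/0.4)
```

Note how the second program's large slowdown (0.4x) barely dents weighted speedup but drags the harmonic mean down sharply; this asymmetry is at the heart of the fairness debate the abstract refers to.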
An Associative Processor (AP) combines data storage and data processing, and functions simultaneously as a massively parallel array SIMD processor and as memory. Traditionally, APs have been based on CMOS technology, similar to other classes of massively parallel SIMD processors. The main component of an AP is a Content Addressable Memory (CAM) array. As CMOS feature scaling slows down, CAM experiences scalabil…
To make applications with dynamic data sharing among threads benefit from GPU acceleration, we propose a novel software transactional memory system for GPU architectures (GPU-STM). The major challenges include ensuring good scalability with respect to the massively multithreading of GPUs, and preventing livelocks caused by the SIMT execution paradigm of GPUs. To this end, we propose (1) a hiera…
Flash storage devices behave quite differently from hard disk drives (HDDs); a page on flash has to be erased before it can be rewritten, and the erasure has to be performed on a block which consists of a large number of contiguous pages. It is also important to distribute writes evenly among flash blocks to avoid premature wearing. To achieve interoperability with existing block I/O subsystem…
Dynamic voltage and frequency scaling (DVFS) is a key technique for reducing processor power consumption in mobile devices. In recent years, mobile systems-on-chip (SoCs) have supported DVFS for embedded graphics processing units (GPUs) as the processing power of embedded GPUs has been increasing steadily. The major challenge of applying DVFS to a processing unit is to meet the quality of serv…
Given the increase of runtime managed code environments in desktop, server, and mobile segments, agile, flexible, and accurate performance monitoring capabilities are required in order to perform wise code transformations and optimizations. Common profiling strategies, mainly based on instrumentation and current performance monitoring units (PMUs), are not adequate and new innovative designs ar…
We present for the first time the concept of per-task energy accounting (PTEA) and relate it to per-task energy metering (PTEM). We show the benefits of supporting both in future computing systems. Using the shared last-level cache (LLC) as an example: (1) we illustrate the complexities in providing PTEM and PTEA; (2) we present an idealized PTEM model and an accurate and low-cost implementation…
Non-volatile memory (NVM) technology holds promise to replace SRAM and DRAM at various levels of the memory hierarchy. The interest in NVM is motivated by the difficulty faced in scaling DRAM beyond 22 nm and, long-term, lower cost per bit. While offering higher density and negligible static power (leakage and refresh), NVM suffers increased latency and energy per memory access. This paper dev…
In out-of-order (OoO) processors, speculative execution with high branch prediction accuracy is employed to achieve good single thread performance. In these processors the branch prediction unit tables (BPU) are accessed in parallel with the instruction cache before it is known whether a fetch group contains branch instructions. For integer applications, we find 85 percent of BPU lookups are d…
This paper proposes persistent transactional memory (PTM), a new design that adds durability to transactional memory (TM) by incorporating emerging non-volatile memory (NVM). PTM dynamically tracks transactional updates to cache lines to ensure the ACI (atomicity, consistency and isolation) properties during cache flushes, and leverages an undo log in NVM to ensure PTM can always consis…
The memory bottleneck has always been a major limiter of computer system performance. While in the past latency was the major concern, today a lack of bandwidth is becoming a limiting factor as well, as a result of exploiting more parallelism with the growing number of cores per die, which intensifies the pressure on the memory bus. In such an environment, any additional traffic to memor…
Integrated CPU-GPU architectures with a fully addressable shared memory completely eliminate CPU-GPU data transfer overhead. Since such architectures are relatively new, it is unclear what level of interaction between the CPU and GPU attains the best energy efficiency. Too coarse-grained (larger) kernels with fairly low CPU-GPU interaction could cause poor utilization of the shared resou…
Abstract—In this letter, a flexible memory simulator, NVMain 2.0, is introduced to help the community model not only commodity DRAMs but also emerging memory technologies, such as die-stacked DRAM caches, non-volatile memories (e.g., STT-RAM, PCRAM, and ReRAM) including multi-level cells (MLC), and hybrid non-volatile plus DRAM memory systems. Compared to existing memory simulators, N…
Many-Accelerator (MA) systems have been introduced as a promising architectural paradigm that can boost performance and improve the power efficiency of general-purpose computing platforms. In this paper, we focus on the problem of resource under-utilization, i.e. Dark Silicon, in FPGA-based MA platforms. We show that, beyond the typically expected peak power budget, on-chip memory resources form a severe under…
Switch on Event Multithreading (SoE MT, also known as coarse-grained MT and block MT) processors run multiple threads on a pipeline machine, while the pipeline switches threads on stall events (e.g., cache miss). The thread switch penalty is determined by the number of stages in the pipeline that are flushed of in-flight instructions. In this paper, Continuous Flow Multithreading (CFMT), a new …
To address the Dark Silicon problem, architects have increasingly turned to special-purpose hardware accelerators to improve the performance and energy efficiency of common computational kernels, such as encryption and compression. Unfortunately, the latency and overhead required to off-load a computation to an accelerator sometimes outweighs the potential benefits, resulting in a net decrease …
A novel method to protect a system against errors resulting from soft errors occurring in virtual address (VA) storing structures, such as translation lookaside buffers (TLBs), the physical register file (PRF) and the program counter (PC), is proposed in this paper. The work is motivated by showing how soft errors impact the structures that store virtual page numbers (VPNs). A solution is proposed b…
Power mismatching between supply and demand has emerged as a top issue in modern datacenters that are under-provisioned or powered by intermittent power supplies. Recent proposals are primarily limited to leveraging uninterruptible power supplies (UPS) to handle power mismatching, and therefore lack the capability of efficiently handling irregular peak power mismatches. In this paper we p…
Web browsing on mobile devices is undoubtedly the future. However, with the increasing complexity of webpages, the mobile device’s limited computation capability and energy budget become major obstacles to a satisfactory user experience. In this paper, we propose a mechanism to effectively leverage processor frequency scaling in order to balance the performance and energy consumption of mobile w…
JavaScript is a sequential programming language, and Thread-Level Speculation has been proposed to dynamically extract parallelism in order to take advantage of parallel hardware. In previous work, we have shown significant speed-ups with a simple on/off speculation heuristic. In this paper, we propose and evaluate three heuristics for dynamically adapting the speculation: a 2-bit heuristic, an e…
Bitwise operations are an important component of modern day programming, and are used in a variety of applications such as databases. In this work, we propose a new and simple mechanism to implement bulk bitwise AND and OR operations in DRAM, which is faster and more efficient than existing mechanisms. Our mechanism exploits existing DRAM operation to perform a bitwise AND/OR of two DRAM rows c…
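The in-DRAM AND/OR mechanism described here rests on a per-bit majority effect: simultaneously activating three rows leaves each bit at the majority of the three cells, so presetting a control row to all 0s yields A AND B, and to all 1s yields A OR B. A functional sketch of that logic (ignoring DRAM timing and circuit detail, and assuming the triple-row-activation formulation):

```python
# Hedged sketch: per-bit majority of three DRAM rows models the bulk
# bitwise mechanism abstractly; no charge-sharing or timing is modeled.
def majority_row(a, b, c):
    """Bitwise majority of three equal-length 0/1 rows."""
    return [1 if x + y + z >= 2 else 0 for x, y, z in zip(a, b, c)]

A = [1, 0, 1, 1, 0, 0, 1, 0]
B = [1, 1, 0, 1, 0, 1, 1, 0]
zeros, ones = [0] * len(A), [1] * len(A)

print(majority_row(A, B, zeros))  # control row all 0s -> bitwise AND
print(majority_row(A, B, ones))   # control row all 1s -> bitwise OR
```

The appeal of the hardware scheme is that this majority falls out of existing row-activation behavior, so an entire row-width AND/OR completes without moving data over the memory bus.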
We study the tradeoffs between Many-Core machines like Intel’s Larrabee and Many-Thread machines like Nvidia and AMD GPGPUs. We define a unified model describing a superposition of the two architectures, and use it to identify operation zones for which each machine is more suitable. Moreover, we identify an intermediate zone in which both machines deliver inferior performance. We study the sh…
gem5-gpu is a new simulator that models tightly integrated CPU-GPU systems. It builds on gem5, a modular full-system CPU simulator, and GPGPUSim, a detailed GPGPU simulator. gem5-gpu routes most memory accesses through Ruby, which is a highly configurable memory system in gem5. By doing this, it is able to simulate many system configurations, ranging from a system with coherent caches and a si…
Over the past few years, there has been vast growth in the area of the web browser as an applications platform. One example of this trend is Google’s Native Client (NaCl) platform, which is a software-fault isolation mechanism that allows the running of native x86 or ARM code on the browser. One of the security mechanisms employed by NaCl is that all branches must jump to the start of a valid…
Consider a workload comprising a consecutive sequence of program execution segments, where each segment can either be executed on a general-purpose processor or offloaded to a hardware accelerator. An analytical optimization framework, based on the MultiAmdahl framework and Lagrange multipliers, for selecting the optimal set of accelerators and for allocating resources among them under constrained are…
Memory access times are the primary bottleneck for many applications today. This “memory wall” is due to the performance disparity between processor cores and main memory. To address the performance gap, we propose the use of custom memory subsystems tailored to the application rather than attempting to optimize the application for a fixed memory subsystem. Custom subsystems can take advant…
With the trend towards an increasing number of cores in multicore processors, the on-chip network that connects the cores needs to scale efficiently. In this work, we propose the use of high-radix networks in on-chip networks and describe how the flattened butterfly topology can be mapped to on-chip networks. By using high-radix routers to reduce the diameter of the network, the flattened butterfly o…