Part 5 (Key Section)【Most-Influential Technologies of the Future】
The following eight technologies show promise to be among the most influential over the next two decades. For the most part, these technologies are still, at least partly, dreams. However, it is only through dreams that the goals of the future can be achieved.
The author believes it is a principal role of oil companies to create the dreams—a vision for the future—and then to collaborate with research centers, service companies, academia, other oil companies, and others to achieve them. However, it is not enough just to have dreams. Two points are emphasized: dreams must be well defined, otherwise they do not form a solid basis for meaningful collaboration; and we must actively pursue these dreams, because having dreams without pursuing them is fruitless. Therefore, in describing these technologies, the status of their pursuit is included, frequently citing examples from Saudi Aramco by way of illustration.
【Extreme-Reservoir-Contact Wells.】Maximum-reservoir-contact (MRC) wells are smart multilateral wells that contact more than 3 miles of the reservoir through branches (laterals) from the main wellbore. Such wells have become very popular and very effective in draining the reservoir, because increasing the number of laterals leads to a significant increase in productivity, assuming the laterals are properly designed to optimize drainage. This increase is particularly important for tighter, more-heterogeneous reservoirs. The recent Haradh III field of Saudi Aramco, for example, relied exclusively on such wells, producing 300,000 B/D from 32 smart MRC wells.
However, the difficulty with these wells is that only a few smart laterals per well are possible, because each lateral requires mechanical control lines to the wellhead. In the future, smart wells will have a large number of laterals to optimize drainage and to access difficult isolated stringers. Replacing hydraulic lines with wireless telemetry to control downhole valves, a goal currently being pursued, will eliminate the need for mechanical control lines in these wells. A downhole control module would send wireless commands to every control valve, eliminating mechanical lines and, therefore, facilitating a theoretically unlimited number of smart laterals as well as an unlimited number of valves within each lateral (Fig. 2). Power would be provided to these valves from a rechargeable battery that can be trickle-charged by the energy of the flowing fluids.
Wireless telemetry will enhance many applications, including permanent downhole monitoring, because downhole "wiring" is becoming increasingly difficult with the addition of downhole devices. Alternatively, electrically wired control valves can be used. This approach would not require wireless telemetry, but it would still need to send power and signals across short distances at the connection between the main bore and each smart lateral (such that all the laterals are powered from the wellhead). This can be achieved with inductive-coupling technology.
【Smart Inflow-Control Devices.】Inflow-control devices allow even distribution of flow throughout the horizontal section of a well by introducing additional drawdown in the more-prolific sections of the well (achieved by moving the fluid through helical flow paths within the devices). This balanced production profile impedes gas cusping from the gas cap or water coning from the aquifer, thus increasing economic recovery.
However, existing conventional inflow-control devices have no means of adjusting their configuration once deployed in the well and cannot, therefore, respond to changing well conditions. Smart inflow-control devices will add either autonomous adjustment of flow in the device segments according to the gas or water content of the crude, or electric or wireless control of downhole valves attached to each device segment. Both approaches are being pursued, and either would allow the inflow-control devices to respond effectively to well conditions that change with time and to errors in estimating the well-productivity profile before the device is deployed.
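As a rough illustration of the autonomous variant, the adjustment logic of a segment could resemble a simple choke schedule driven by the unwanted-phase content of its inflow. The function and thresholds below are purely a hypothetical sketch, not a description of any actual device:

```python
def icd_opening(water_cut, gas_fraction, wc_limit=0.3, gf_limit=0.2):
    """Hypothetical autonomous-ICD logic: progressively choke a device
    segment as its water cut or free-gas fraction rises, and shut the
    segment in entirely once either limit is exceeded.

    Returns a relative opening: 1.0 = fully open, 0.0 = shut in.
    The limits are illustrative assumptions, not field values.
    """
    # Penalty grows linearly with the worst offending phase.
    penalty = max(water_cut / wc_limit, gas_fraction / gf_limit)
    return max(0.0, 1.0 - penalty)

print(icd_opening(0.05, 0.0))  # oil-dominated segment stays mostly open
print(icd_opening(0.40, 0.0))  # → 0.0: watered-out segment shuts in
```

A real device would of course add hysteresis and rate limits so that the valve does not chatter as measurements fluctuate.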
【Intelligent Autonomous Fields.】Traditionally, the term intelligent field refers to integrating all relevant field information—including reservoir pressure and temperature, wellhead fluid composition, pipeline flow, and plant information—such that the field is managed in real time through live data feeds. Various forms of this concept are in use through subsurface instrumentation linked to central processing. For example, in the Haradh III field, each well is equipped with a permanent downhole-monitoring system that conveys real-time reservoir data to the surface, where these data are integrated to provide real-time monitoring of the field (Fig. 3).
However, future intelligent fields will be much more sophisticated, moving beyond self-monitoring to become fully self-run (i.e., eventually completely autonomous). The field will gather downhole reservoir data, integrate them with wellhead information and management, run a simulation of the reservoir in real time, derive the optimal production and injection allocations, and send commands to downhole control valves in every well to implement this self-generated production strategy. The field also will constantly analyze the data in real time for effective data mining and control; for example, it can identify wells with water breakthrough by comparing downhole and surface pressure and temperature measurements to detect trend anomalies that mark the onset of the flood front. The role of the reservoir engineer in these autonomous fields will be monitoring and oversight rather than intervention and control.
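A minimal sketch of the trend-anomaly idea, assuming a simple rolling-baseline z-score test on a downhole-minus-surface measurement difference (the window size, threshold, and data values below are illustrative assumptions, not from the paper):

```python
from statistics import mean, stdev

def detect_breakthrough(deltas, window=10, z_threshold=3.0):
    """Flag the first sample whose downhole-minus-surface measurement
    delta deviates anomalously from a rolling baseline.

    deltas      -- time series of (downhole - surface) values
    window      -- number of prior samples forming the moving baseline
    z_threshold -- deviation (in standard deviations) treated as anomalous
    Returns the index of the first suspected flood-front arrival, or None.
    """
    for i in range(window, len(deltas)):
        baseline = deltas[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(deltas[i] - mu) > z_threshold * sigma:
            return i
    return None

# Stable trend, then a sudden shift as aquifer water arrives:
series = [5.0, 5.1, 4.9, 5.0, 5.05, 4.95, 5.0, 5.1, 4.9, 5.0, 5.0, 2.0]
print(detect_breakthrough(series))  # → 11 (the late outlier)
```

An autonomous field would run tests of this kind continuously per well, escalating flagged anomalies to the engineer for oversight rather than acting blindly.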
【Passive-Seismic Monitoring.】Thousands of very faint earthquakes, with magnitudes of −1, −2, and lower, occur constantly; they have no tangible effect, and their signals can hardly be recorded by normal means. Passive-seismic monitoring involves recording this faint natural seismicity (sometimes called microseismicity) at the reservoir level to infer the distribution of faults and fractures around the wellbore and thereby map the flow conduits away from the well locations. This monitoring is accomplished without active seismic sources such as vibrators or dynamite (Fig. 4).
This approach enables monitoring the reservoir in real time rather than in time-lapse mode (as with 4D seismic), and it has the potential to introduce a new method of analyzing and monitoring fluid migration through the reservoir (Dasgupta 2005), pushing the effectiveness of reservoir management to a new plateau. A recent passive-monitoring symposium was oversubscribed, drawing attendees from more than 50 countries, with applications ranging from fault characterization, to monitoring stimulation jobs, to deducing the effects of production and injection. This technology is in its infancy but is growing at an explosive rate, and it holds promise to revolutionize how seismic data are gathered and exploited.
【Gigacell Simulation.】The widespread use of 3D-seismic data and sophisticated modeling algorithms has resulted in detailed geologic models that describe reservoir properties in high resolution. However, most of that detail is lost when these models are used for flow simulation because current simulators are incapable of handling a large number of cells. These high-resolution models are first "scaled up" before simulation, in effect severely reducing the resolution by smearing and averaging data to reduce the number of cells. Future reservoir simulators will be able to simulate giant fields [even the mammoth Ghawar field, which extends more than 280×26 km (Fig. 5)] in high resolution by using the geologic models directly, without scaling up. To achieve this goal, reservoir simulators must be capable of handling many more cells than is possible currently, moving from megacell models (tens to hundreds of millions of cells) to gigacell models (billions of cells).
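The information loss from scaling up can be seen in a toy example: averaging a fine-grid permeability profile into coarse blocks smears a thin high-permeability streak into its neighbors. This is a pure-Python sketch with hypothetical values, not an actual upscaling workflow:

```python
def upscale(fine, factor):
    """Coarsen a 1-D property profile by arithmetic averaging of
    consecutive blocks of `factor` cells -- the kind of smearing
    that discards geologic detail before simulation."""
    return [sum(fine[i:i + factor]) / factor
            for i in range(0, len(fine), factor)]

# A 1-md matrix with a single 1000-md streak (illustrative values):
perm_md = [1, 1, 1000, 1, 1, 1, 1, 1]
print(upscale(perm_md, 4))  # → [250.75, 1.0]
```

The single-cell conduit that would dominate flow in the fine model disappears into a diffuse 250-md block, which is precisely why simulating the geologic model directly is attractive.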
Several efforts in pursuit of this dream are under way. For example, Saudi Aramco routinely runs simulation models of 30 to 40 million cells on its in-house-developed simulator (Dogru et al. 2002). However, the next generations of simulators will handle much larger models. Saudi Aramco recently tested its algorithms successfully in prototype runs on a 258-million-cell full-field Ghawar model. That model simulated 60 years of history and required approximately 1 day to run on a cluster of commonly available PCs. Initial results indicate that the more-refined 258-million-cell model shows water-cut behavior more in line with the physics and the field data than the results of the old 10-million-cell model. The benefit of simulating the reservoir in higher resolution is tangible. With the hundred-million-cell record established, the billion-cell record will be reached soon.
The increased capacity of these simulators must come from both novel algorithms and improved hardware; hardware alone is by far insufficient. As shown in Fig. 6, almost-linear scalability would be required to capitalize on the new computer clusters. Innovative visualization techniques also would be required because the volume of data is such that it cannot be visualized effectively with conventional methods. Methods are being tested that allow users not only to see but also to feel the data (by navigating the reservoir through 3D haptic control devices with force feedback) and to hear the data (through smart voice alerts tied to specific events reached by the simulator).
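Why hardware alone cannot deliver the required near-linear scalability can be illustrated with Amdahl's law: any serial fraction remaining in the algorithm caps the speedup obtainable from a cluster, no matter how many nodes are added. The sketch below uses illustrative serial fractions:

```python
def amdahl_speedup(n_procs, serial_fraction):
    """Amdahl's law: overall speedup on n_procs processors when
    serial_fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# On a 1,000-node cluster, even 1% serial work caps the speedup
# near 91x; near-linear scaling requires driving the serial
# fraction of the algorithm toward zero.
for s in (0.05, 0.01, 0.001):
    print(f"serial fraction {s}: speedup ~ {amdahl_speedup(1000, s):.0f}x")
```

This is the sense in which algorithm novelty, not just bigger clusters, is the bottleneck for gigacell simulation.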
【Smart Fluids.】Smart fluids are those placed in the reservoir to impart a specific desired behavior (for example, to completely plug watered-out zones while allowing the oil to flow in other zones). At first, they will be used to change the near-wellbore regions, but eventually they will be deployed deeper into the reservoir to change its properties on a much larger scale. These fluids will be custom-designed and will impart the desired behavior in the reservoir automatically. In other words, they could be bullheaded into the reservoir and left to their own means to work, without requiring any sophisticated deployment techniques such as zonal isolation and coiled tubing. This technology is progressing through relative-permeability modifiers and smart emulsified gels, although it currently is available only for limited reservoir conditions and with mixed success. As an example, such fluids can hydrate and expand in the presence of water, plugging the pores and preventing water movement, while shedding the water, dehydrating, and contracting in the presence of oil, thus allowing the oil to move (Fig. 7). Therefore, these fluids can achieve rigless water shutoff simply through chemicals that reduce the relative permeability to water, blocking its flow in watered-out zones while preserving the oil flow.
【Bionic Wells.】Wells of the future could be like plants. These wells would not be drilled; they would be planted. A tree root seeks a wet area in the soil, extends a branch of the root into that zone, cuts off that branch once the area dries up, grows another branch toward a different area, and so forth. Bionic wells would act similarly (but would follow the oil rather than the water). Once the vertical segment of the well was drilled (planting the well), the well would be "left to its own means." It would extend a smart lateral to an undrained oil-bearing zone, "cut off" that lateral once the zone waters out, "extend" another lateral to a different zone, and so on (Fig. 8).
Though this concept may seem too futuristic, the industry has achieved much of that dream (Fig. 9). Starting with vertical wells (like the root of a simple tree), horizontal wells were drilled (a more sophisticated root), and then multilaterals were drilled (similar to tree roots with several branches). Thereafter, smart downhole-control valves were added that can choke specific laterals and effectively cut off those branches (like a root cuts off one of its branches), then downhole monitoring and surface controls were added that enable analyzing the reservoir-fluid properties and predicting the onset of water (similar to the root deciding when a zone has dried up). All that is reality now. And very complicated (certainly root-like) wells are being pursued, such as the extreme-reservoir-contact well described earlier.
The remaining technology is an advancement in drilling that would allow the well to "drill for itself." Admittedly, this goal is not easy, but techniques such as coiled-tubing drilling and drilling by fluid jetting already exist, while others, such as laser drilling, are in research. These wells could have openhole completions, with downhole valves that are viscoelastic rather than mechanical, such that they change their rheology to open or close once they react with a dispatched chemical specially coded for the composition of that particular valve.
【Reservoir Nanorobots.】These robots would be 1/100th the diameter of a human hair and would be deployed in large numbers with the fluids injected into the reservoir. During their journey, they would measure reservoir pressure, temperature, and fluid type and store that information in onboard memory. At least a small portion of them then would be recovered from the produced fluids and their stored data downloaded, providing critical information about the reservoir encountered during the journey and thus effectively mapping the reservoir (depending on the size of the field, this journey can take several months). Eventually, real-time communication (perhaps over short hops to downhole telemetry stations) and mobility (powered by charging from friction with the fluids or from downhole charging stations) also could be added (Fig. 10).
Imagine sending these robots ahead of the drilling bit instead of geosteering, or sending them from a discovery well to find the edges of the reservoir and the water contact, eliminating the delineation program. The possibilities are endless. Farfetched perhaps? Not at all. Advances in nanosensor miniaturization are occurring rapidly. Nanotechnology has made significant strides in material and medical applications, but not so in the oil industry. However, efforts are underway to bring these advances into our industry. For example, an SPE Advanced Technology Workshop, “Nanotechnology in Upstream E&P: Nano-Scale Revolutions to Mega-Scale Challenges,” was held 3–6 February 2008 for that purpose.
The long journey to achieving functioning nanorobots starts with answering a very simple question: What is the largest robot that can travel through the reservoir without getting caught in the pore throats? After all, there is no sense in spending time and money on nanorobots if microrobots can do the job (microrobots can be manufactured now). However, there also is no use in deploying robots only to have them caught in the pore throats around the wellbore, abruptly halting their journey and possibly damaging the reservoir.
To that end, Saudi Aramco has analyzed 850 core plugs from the Arab-D reservoir in Ghawar and mapped the distribution of pore-throat sizes. The distribution is bimodal, but the important observation is that most of the pore throats are larger than approximately 500 nm. This establishes an initial target on which miniaturization efforts should focus (actually, to avoid bridging, the robots must be approximately one-fourth of this pore-throat size). The next step is a physical experiment in which nanoparticles ("dumb" nanorobots) of several specific sizes taken from this distribution are injected at prescribed concentrations into representative Ghawar core plugs. The number of particles that travel through the plug end to end and emerge from the other side is counted, so that the size question can be answered empirically. This experiment is being conducted (Fig. 11). Likewise, the journey of the nanorobot through the pore structure is being simulated and modeled in software. In other words, the first milestone in the pursuit of the nanorobot dream is answering the size question in three ways: by observing the pore-throat-size distribution, by conducting an empirical nanoparticle-injection experiment, and by software simulation.
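Under the one-fourth bridging rule stated above, the arithmetic for the initial miniaturization target is simple. The sketch below only encodes the two figures given in the text (the ~500-nm pore-throat size and the one-fourth ratio):

```python
def max_robot_size_nm(pore_throat_nm, bridging_ratio=0.25):
    """Largest particle diameter expected to pass without bridging,
    using the rule of thumb that particles must be about one-fourth
    of the pore-throat diameter."""
    return pore_throat_nm * bridging_ratio

# Most Arab-D pore throats exceed ~500 nm, so the initial target is:
print(max_robot_size_nm(500))  # → 125.0 (nm)
```

That ~125-nm figure is why the effort must focus on true nanorobots rather than the micrometer-scale robots that can already be manufactured.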