# A hybrid AI approach to optimize oilfield planning

What is the best way to set up wells in an oil or gas field? It’s a simple question to ask, but the answer can be enormously complex. Today, a Caltech/JPL spinoff company is developing a new approach that combines traditional HPC simulation with deep reinforcement learning performed on GPUs to optimize oil and gas extraction.

The well placement game is familiar to oil and gas companies. For years, they have used simulators running on HPC systems to model underground reservoirs. On top of this model, they run some sort of optimizer to drive the model iterations, with the goal of finding the optimal number, type, and placement of wells for a given field.

The possible combinations of number, type, and placement quickly become a difficult math problem, according to Beyond Limits’ Chief Technology Officer for Industrial AI, Shahram Farhadi. Even with a “fairly simple” model with one million grid cells and five wells (one injector well and four producer wells), the number of possible moves is on the order of 10^20, he says. By comparison, there are about 5 million possible five-move sequences in chess, and about 10^12 possible five-move sequences in Go.

“The optimization problem is combinatorial and it explodes very quickly,” explains Farhadi. “In that sense, you have an optimization that is unsolvable if your only tool is brute-force search.”
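A quick back-of-the-envelope calculation shows why brute force fails. This sketch is illustrative only (the grid size matches the article, but the exact counting Beyond Limits uses is not specified); even just counting unordered choices of five cells on a million-cell grid already dwarfs the five-move game trees quoted for chess and Go:

```python
import math

# Illustrative count, not Beyond Limits' exact formulation:
# number of ways to choose cells for 5 wells on a 1,000,000-cell grid.
grid_cells = 1_000_000
wells = 5

placements = math.comb(grid_cells, wells)  # unordered well placements
print(f"{placements:.2e}")  # → 8.33e+27, far beyond exhaustive search
```

And this ignores well *type* and drilling *order*, which only multiply the count further.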

The optimizers that energy companies currently rely on include things like genetic algorithms and particle swarm algorithms, says Farhadi. “They’re all great,” he says, “but they’re optimizers in the simplest sense.”

At Beyond Limits, Farhadi has spearheaded a new approach to optimizer development that harnesses some of the latest advancements in reinforcement learning and deep convolutional neural networks.

Deep learning approaches can work with, and learn from, much larger pools of data than traditional machine learning algorithms. One piece of the well placement puzzle is the radar tomography data that feeds the traditional physics simulator. But in this case, what the deep learning approach really relies on is the results of each successive run of the simulator. By pairing its new AI-based field planning agent with a traditional physics simulator, Beyond Limits is pushing the state of the art in oil well field planning.

According to Farhadi, the new field planner is able to learn from successive iterations of the simulator. A “win” is defined as a high Net Present Value (NPV) score, which is essentially the expected overall recovery of the oil or gas minus costs. The HPC model represents the physics of the multiphase flow and the company’s own models, which provide insight into well spacing (if you place wells too close together, they will interfere with each other).
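NPV is a standard discounted-cash-flow measure, and the scoring idea is easy to sketch. The function and the toy field plan below are invented for illustration; they are not Beyond Limits' actual scoring code:

```python
# Sketch of an NPV-style "win" score: discounted net cash flows over time.
def npv(cash_flows, discount_rate=0.1):
    """cash_flows[t] = revenue minus costs in year t (t=0 is today)."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

# Toy field plan: $30M drilling cost up front, then net revenue each year.
plan = [-30.0, 12.0, 12.0, 12.0, 12.0]  # $ millions
print(round(npv(plan), 2))  # → 8.04
```

A plan that recovers more hydrocarbons with fewer wells pushes this number up, which is exactly the signal the reinforcement learner optimizes for.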

“Reinforcement learning tries to learn by combining this representation of states and what has happened,” Farhadi tells *Datanami*. “Think of a sequence of actions, and then the reward of, did we lose or win. And then that reward is fed back into the system so that the system learns to only take actions that are rewarding.”
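The feedback loop Farhadi describes can be sketched with a minimal tabular learner. The toy "field" of ten candidate locations and its reward function are invented for illustration; Beyond Limits' agent uses deep convolutional networks over 3D state data rather than a lookup table:

```python
import random

random.seed(0)

actions = list(range(10))           # 10 candidate well locations
reward = lambda a: -(a - 7) ** 2    # location 7 is secretly the best
q = {a: 0.0 for a in actions}       # learned value of each action
alpha, epsilon = 0.5, 0.2

for episode in range(500):
    # epsilon-greedy: mostly exploit what we've learned, sometimes explore
    a = random.choice(actions) if random.random() < epsilon else max(q, key=q.get)
    r = reward(a)                   # "did we lose or win?"
    q[a] += alpha * (r - q[a])      # feed the reward back into the system

print(max(q, key=q.get))  # → 7, the rewarding action wins out
```

The same loop structure holds at scale: propose an action, score it with the simulator, and fold the reward back into the model so rewarding actions dominate.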

The approach essentially codifies the advances of expert systems (this intellectual property was licensed from Caltech/JPL) into a deep learning model composed of perceptors and reasoners. The perceptors create labels from the pixels, and the reasoners then make sense of those labels to reason about the world, says Farhadi.

The trick Farhadi and his team came up with was how to map three-dimensional radar tomography data into winning and losing states that the deep learning model could act on. The system essentially remembers what worked in previous iterations, encoded in the layers of the neural network, and brings that knowledge to each subsequent turn until the improvements stop accumulating and it converges on the optimal answer.

“The reinforcement learning paradigm [works]… in a way that you’re sort of trying to memorize states that look like images, in this case 3D images,” says Farhadi. “It’s pretty new. So we are in fact the first to put it in place. And we had to tweak the algorithms quite a bit to allow them, say, to go from learning a game to learning this game. NPV is more of a continuous space. It’s not just about winning.”

Once Farhadi and his team developed the new field planner, the next step was to scale it to the sizes that oil companies will need. Beyond Limits, which was founded in Glendale, Calif., in 2014, benchmarked the new model on three different setups: a 20-core CPU system, a 96-core CPU system, and a single Nvidia A100 GPU system.

Unsurprisingly, the GPU-based system showed the best performance in the benchmark tests, both in iteration time and in the NPV achieved (see figure). The company previously partnered with an oil company on a project that achieved a production value of $50 million, according to the company.

The hybrid AI approach delivered up to a 184% increase in processing speed over standard operations, according to Beyond Limits. The field planner also improved on the number of simulations performed by 15% compared with other optimization techniques. In addition, the environmental impact was reduced: the company was able to get by with fewer water injector wells alongside four oil producers, compared with the eight to 12 injection wells and four producers in the standard configuration, the company says.

“This decreases the amount of drilling required,” explains Farhadi. “But I also think it’s important to abstract it a bit and think of this simulator like any other industrial simulator. We [foresee] a similar thing with power plants and refineries.”

**Related articles:**

Digital twins and AI keep industry flexible during COVID-19

Texas A&M Reinforcement learning algorithm automates forecasting of oil and gas reserves

Speed up exploration and discovery with remote viewing