Yes, but that still means other customers are paying for the bulk of the cost of the infrastructure upgrades related to the capacity that the new consumer needs.
On the occasions I have needed a new (small-scale) commercial electricity supply I have had to pay a fairly substantial connection charge. I don't know how that is calculated; I think it covers the run back to the nearest grid connection point in the road. Quite how you would allocate the costs of the 400kV infrastructure to individual connections eludes me: it could work out that some poor person ordering at the wrong time has to pay millions whereas everyone else before and after pays nothing. I think there does come a point where shared infrastructure is just that.
In a very simplified way, as I understood it, whenever there was a new connection the customer would pay part of the cost of the common "pipe" they were connecting to. This would ensure that, when that pipe needed upgrading, the money was available for it.
I have been to the public consultation for one of these data centres. They are planning a dedicated substation for it; I think it takes 275kV in and delivers at 11kV to each of the data centre buildings. They are having to wait a few years for the connection.
If they built the data centre in Scotland, that substation would be built there, but essentially they pay the same wherever they build. With a lot of the new generation being in the north of GB, the 400kV grid surely needs more upgrades if these huge new data centres are built in the south east instead of in Scotland.
An additional reason for so many data centres is that, for "the cloud" which these data centres support to be 100% robust, there must be a large amount of duplication within them. You can have regions of data centres, each region containing multiple data centres in different geographical areas.
Generally the reason is that more capacity is needed, and AI is adding much of the demand.
The cloud is not duplicating work; it just makes it simpler to deploy components in multiple regions etc. so they are less likely to fail all at the same time. The multiple components split the processing of requests amongst them. If one fails, the request is resent and processed elsewhere.
It is like going to charge your car where there are several charging stations, each able to serve only one or two clients; the fact that one station may be out of action doesn't matter.
In the cloud, extra stations are created and destroyed as and when they are needed.
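To make that concrete, here is a minimal Python sketch (the region names and the random failure are made up for illustration) of the "resend the request and process it elsewhere" idea:

```python
# Toy sketch only: region names and the random failure are made up.
import random

REGIONS = ["region-a", "region-b", "region-c"]  # hypothetical regions

def handle_in_region(region: str, request: str) -> str:
    """Pretend to process a request in one region; occasionally 'fail'."""
    if random.random() < 0.3:   # simulate an outage in that region
        raise RuntimeError(f"{region} unavailable")
    return f"'{request}' processed in {region}"

def process(request: str) -> str:
    """Send the request to the first healthy region, resending elsewhere on failure."""
    for region in REGIONS:
        try:
            return handle_in_region(region, request)
        except RuntimeError:
            continue            # resend the request to the next region
    raise RuntimeError("all regions failed")

print(process("charge session #42"))
```

Spare capacity (the "extra stations") is created and released in much the same automated way.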
8kW Solis S6-EH1P8K-L-PLUS hybrid inverter; G99: 8kW export; 16kWh Seplos Fogstar battery; Ohme Home Pro EV charger; 100Amp head, HA lab on mini PC
the reason is that more capacity is needed, and AI is adding much of the demand.
That comment could've been written by AI!
AI itself may or may not require large-scale computational servers.
After all, I can add AI (a neural network) to a Raspberry Pi by using an extra board.
What does take a lot of processing power is AI research, such as might be done by companies developing transport self-driving capability.
The present Government has also made a commitment for Britain to be at the forefront of AI Research. It has designated a University Campus in Bristol to champion that work.
But that's a different policy to the one they're pursuing in relation to data centres.
The link between these issues is that both policies welcome the development of sites that will consume vast quantities of electricity... more than the rest of the entire country.
the reason is that more capacity is needed, and AI is adding much of the demand.
That comment could've been written by AI!
AI itself may or may not require large-scale computational servers.
I agree, AI can be seen as merely a type of data processing.
But the fact that the way generative AI and LLMs interact is so transformational (they handle many problems that previously only humans could) means that the "AI industry" is attracting very high investment and channelling much of it towards significantly more processing capacity. It is a fact that over 140 data centres are currently at different stages of planning in GB.
There are many definitions of what AI is, and these data centres may just be using the term early on for marketing reasons. Once the detailed design is entered into, there are differences depending on the planned processing. And in simple terms, the energy consumption for AI is a multiple of the energy consumption of other types of data centre.
Most of the folk that I saw in the consultation for one of these, with a peak demand of 400MW from the 275kV grid, are not keen to have it nearby, but that is what the current pricing is causing.
The cloud is not duplicating work; it just makes it simpler to deploy components in multiple regions etc. so they are less likely to fail all at the same time.
I did not say the cloud duplicates work, I said the cloud has a level of duplication; there is a subtle difference. If you have a commercial website the chances are it will be trading 24/7. To make the website robust your data has to be duplicated on a number of geographically separate sites. At any one time only one site may be servicing the web requests from customers, but the other duplicated sites will be on standby, on live server equipment, consuming power. These duplicated sites are still required to keep their copy of your data in synchronisation with the copy that is currently doing the work (still requiring a certain level of processing and power consumption), otherwise your business will very quickly fall apart.
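As a rough sketch of that synchronisation work (the dictionaries and the order details below are just stand-ins for databases at two geographically separate sites), this is why a standby site is never entirely idle:

```python
# Toy sketch: the dictionaries stand in for databases at two geographically
# separate sites; the order details are made up.
active_site = {}    # the site currently serving customer requests
standby_site = {}   # warm standby at another location, kept in sync

def record_order(order_id: str, details: dict) -> None:
    """Write to the active site and replicate to the standby straight away."""
    active_site[order_id] = details
    standby_site[order_id] = details  # the synchronisation work the standby does 24/7

record_order("order-1001", {"item": "widget", "qty": 3})

# If the active site is lost, the standby already holds the data and can take over.
assert standby_site["order-1001"]["qty"] == 3
```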
In the cloud, extra stations are created and destroyed as and when they are needed.
It depends on your level and type of subscription. You can have images started up and stopped for short-duration work, or permanently running images for commercial websites or websites like this one.
Regards
5 Bedroom House in Cambridgeshire, double glazing, 300mm loft insulation and cavity wall insulation
Design temperature 21°C @ OAT -2°C = 10.2kW heat loss, deltaT = 8 degrees
Bivalent system containing:
12kW Samsung High Temperature Quiet (Gen 6) heat pump
26kW Grant Blue Flame Oil Boiler
4.1kW Solar Panel Array
34kWh GivEnergy Stackable Battery System
Demand sites, such as data centres, would no longer be required to contribute towards grid upgrade costs unless their predicted usage triggered a high-cost threshold.
Generation sites would no longer be required to pay any contribution towards reinforcement/upgrade of voltage levels above the one to which they connect. So if the generation site outputs 33kV, it doesn't pay anything towards the required upgrades at 132kV or the 400kV Transmission Grid.
All those grid upgrades for which demand sites and generators would no longer be contributing anything would still go ahead, but the money would come from increased TNUoS and DUoS payments, which are part of our consumer bills.
You can easily see the consequences of that decision reflected in the standing charge increases for Q2 of 2022.
There was also a jump as Ofgem moved costs from variable to fixed charges for end users.
In the meantime, Ofgem has changed the software used to host its website. That was designed to break all the historical links you'd saved, thereby preventing us from holding them to account. 🤨
Standing charges for domestic electricity customers have increased significantly since 2021. For a customer who pays for their electricity bills by direct debit, they have more than doubled from £86 per annum to £186 per annum on average between 2021 and 2023. The reason for this increase in electricity standing charges is that suppliers are now having to pay more fixed costs and are passing them on to customers in the form of standing charges rather than on a unit cost basis.
The first of these costs is some types of network costs (the costs of the infrastructure for getting electricity to customers' homes and businesses). In 2019, Ofgem took the decision to move charging of certain types of network costs from a unit cost basis to a fixed basis (known as the Targeted Charging Review or TCR), which came into effect in 2022 and 2023. The main reason for this change was that charging on a volumetric basis made it too easy for some users to avoid network costs. The TCR was necessary to future-proof the GB electricity network and to ensure that it is properly ready for flexibility and net zero, and it has made the network more efficient, reducing the overall cost for customers. Whilst these measures will have removed the cost of network charges from (and therefore reduced) unit costs, the wholesale energy crisis means that the effect of this has not been obvious to consumers.
The second reason is that the costs of supplier failures in electricity (the costs of appointing Suppliers of Last Resort or SoLR) are recovered through network costs, which in turn are recovered from suppliers as a fixed cost. We expect that the level of supplier failures that we saw in 2021 and 2022 will be a one-off, in part due to the measures that we have taken to strengthen the retail energy industry. Suppliers have passed these costs on to their customers through standing charges rather than unit rates. It is important to note that these charges would need to be met somehow and would be borne by customers through unit rates if not through standing charges. However, we recognise that standing charges are a particular burden for some consumers.
In gas, network costs and SoLR costs are passed to customers through unit rates. We have not seen the same urgent need for change in the gas sector that prompted the TCR in electricity. As a result, gas standing charges have remained broadly static in real terms.
… to find a useful starting point on the Ofgem website.
I have pasted some of the info from the PDF.
It isn't particularly easy to see the shift in the Excel sheets from Ofgem, particularly as unit rates also increased due to the energy crisis. So users saw an increase in both unit rates and standing charges, when the aim was to reduce unit rates.
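As a rough back-of-envelope check on the figures quoted from the Ofgem paper above (the £86 and £186 per annum averages for direct debit customers):

```python
# Back-of-envelope only, using the averages quoted in the Ofgem paper above.
annual_2021 = 86.0    # £/year, average direct debit electricity standing charge
annual_2023 = 186.0   # £/year

for year, annual in (("2021", annual_2021), ("2023", annual_2023)):
    print(f"{year}: £{annual:.0f}/yr ≈ {annual / 365 * 100:.1f} p/day")

print(f"increase: {100 * (annual_2023 - annual_2021) / annual_2021:.0f}%")
```

That works out at roughly 24p/day rising to 51p/day, an increase of a little over 100%.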
It would be worth keeping a copy of the quarterly updates from Nesta to get a simple picture of how things are shifting. It is one of the better sites; too many news articles are full of inaccurate information.
Nesta haven't yet updated with the latest info, so grab a copy now.
Are you somehow connected to Ofgem? If so, you are very welcome (which doesn't mean you aren't welcome if you aren't connected to Ofgem).
If you are connected to Ofgem then many thanks for explaining the sequence of events. Please continue to help us to understand! Unfortunately a lot of people seem to think that these matters can be reduced to sensationalist sound bites, about which they can then complain. The truth is more complex and it helps a lot if we understand it.
The cloud is not duplicating work; it just makes it simpler to deploy components in multiple regions etc. so they are less likely to fail all at the same time.
I did not say the cloud duplicates work, I said the cloud has a level of duplication; there is a subtle difference. If you have a commercial website the chances are it will be trading 24/7. To make the website robust your data has to be duplicated on a number of geographically separate sites. At any one time only one site may be servicing the web requests from customers, but the other duplicated sites will be on standby, on live server equipment, consuming power. These duplicated sites are still required to keep their copy of your data in synchronisation with the copy that is currently doing the work (still requiring a certain level of processing and power consumption), otherwise your business will very quickly fall apart.
I can see your point, but as this is an area I am fairly familiar with, it seemed worth adding to the discussion.
My concern is that it may be oversimplified, and so does not explain why more data centres are planned and why this requires so much more power.
There are generations of IT technologies that allow for automatic reduction of waste. In fact, waste is only tolerated if it is not too expensive to keep paying for. Of course, if you have companies willing to pay for technology they do not use, it will consume some power, but data centres have many methods to reduce that waste 😉.
In my experience it does not have to be so. As one example of a component, when a web server is well tuned and not serving requests, it reduces the resources it requires to a minimum.
And to look at one example from yours, when you say "To make the website robust your data has to be duplicated on a number of geographically separate sites", you are also saying that a website cannot be made robust unless it is on different sites.
Which is not a fact. There are different types of requirements, some of which you could call robustness, but this is such a specialised field that there is no single condition. All I would suggest is that some data/config/code is stored elsewhere so that the site could be started there (see the rough sketch a little further down). That does not need any significant power, in some cases none at all, so it makes little difference to power consumption. Of course, in some use cases there is state data that is required and more of this is replicated, but the point remains.
I wish it were that simple, but if it were, it would not have taken so many decades to develop AI.
Security requirements alone add a lot of processing, but could you run anything important without them? The same goes for being able to recover from a server failure: can anything important be run without being able to restart it?
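As a rough illustration of the earlier point about only keeping data/config/code stored elsewhere (the file name and site details below are made up, and a local file stands in for off-site storage), nothing runs at the second site until it is actually needed, so it draws next to no power in the meantime:

```python
# Toy sketch of a "cold standby" (file name and site details are made up):
# only a small config/state snapshot is kept at the other location, and nothing
# runs there until it is needed. A local file stands in for off-site storage.
import json
from pathlib import Path

OFFSITE_COPY = Path("offsite_snapshot.json")  # pretend this lives at another site

def save_snapshot(config: dict) -> None:
    """Push config/state to the other site occasionally (cheap, infrequent)."""
    OFFSITE_COPY.write_text(json.dumps(config))

def start_site_elsewhere() -> dict:
    """After a failure, spin the site up at the other location from the stored copy."""
    config = json.loads(OFFSITE_COPY.read_text())
    print(f"starting site '{config['name']}' at the standby location")
    return config

save_snapshot({"name": "shop.example", "currency": "GBP"})
start_site_elsewhere()
```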
In the cloud, extra stations are created and destroyed as and when they are needed.
It depends on your level and type of subscription. You can have images started up and stopped for short-duration work, or permanently running images for commercial websites or websites like this one.
I was just using a metaphor for the client-server concept, which has been highly scalable for decades.
The increase in data centre capacity requirements unfortunately cannot be prevented by not having the cloud.
The same argument about waste would apply to any human activity: farming, logistics, whatever. Even if it can always be optimised.
If the AI applications, instances and users stayed the same, then we could expect optimisation and better technology to reduce space requirements. But this is not what investors think, and neither do the organisations taking the risk of building more data centres. They all expect the demand to grow.
8kW Solis S6-EH1P8K-L-PLUS hybrid inverter; G99: 8kW export; 16kWh Seplos Fogstar battery; Ohme Home Pro EV charger; 100Amp head, HA lab on mini PC
And to look at one example from yours, when you say "To make the website robust your data has to be duplicated on a number of geographically separate sites", you are also saying that a website cannot be made robust unless it is on different sites.
Which is not a fact. There are different types of requirements, some of which you could call robustness, but this is such a specialised field that there is no single condition. All I would suggest is that some data/config/code is stored elsewhere so that the site could be started there. That does not need any significant power, in some cases none at all, so it makes little difference to power consumption. Of course, in some use cases there is state data that is required and more of this is replicated, but the point remains.
...
I haven't been following this thread in detail but I do need to pick up on this.
@batpred, I'm afraid you're incorrect. In @technogeek's example, he outlined a commercial website with transactional data, and a simple standby site elsewhere that could be fired up in the event of a failure will not cut it. To make such a website robust, it absolutely is necessary to have the nodes it runs on - active and standby - distributed across more than one geographical site, and the same holds true for any nodes running the database underneath. Without this geographical separation, a single physical site presents a single point of failure, and without the inactive nodes still actively synchronising there will be a loss of data any time the active node becomes unavailable and another has to take over. Cold standby nodes are only appropriate for web sites presenting static data, and those sites are very few and far between.
As you say, it is a specialised field, and robustness is not the result of a single condition. The key point, though, is that there are any number of conditions that could prevent a site from being robust, rather than robustness being a pick-and-mix from a list of possible conditions. Anything at all that can present a single point of failure is a reduction in that robustness, whether it's a reliance on a single server, a single hard disc, a single Internet connection, a single power supply, a single address, a single DNS server, a single network cable or even a single administrator.
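To put the difference in concrete terms, here is a toy comparison (the dictionaries stand in for databases and the orders are made up): a cold standby restored from the last periodic snapshot loses anything written since that snapshot, whereas a continuously synchronised standby does not:

```python
# Toy comparison (names and orders made up): dictionaries stand in for databases.
active = {"order-1": "paid"}        # the site currently serving customers
hot_standby = dict(active)          # synchronised continuously, on live kit
cold_snapshot = dict(active)        # periodic off-site copy only, nothing running

# New transactions arrive after the last snapshot was taken...
for order in ("order-2", "order-3"):
    active[order] = "paid"
    hot_standby[order] = "paid"     # replicated as it happens (costs power 24/7)
    # ...but the cold snapshot is not updated until the next scheduled copy.

# The active site then fails and each standby takes over with what it has:
print("cold standby recovers:", sorted(cold_snapshot))  # order-2 and order-3 lost
print("hot standby recovers:", sorted(hot_standby))     # nothing lost
```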
105 m2 bungalow in South East England
Mitsubishi Ecodan 8.5 kW air source heat pump
18 x 360W solar panels
1 x 6 kW GroWatt battery and SPH5000 inverter
1 x Myenergi Zappi
1 x VW ID3
Raised beds for home-grown veg and chickens for eggs
"Semper in excretia; sumus solum profundum variat"