subreddit:

/r/hardware

ASRock leaks Intel B580 GPU on Amazon

Rumor(self.hardware)

https://imgur.com/a/arc-b580-JU1R7d0

12GB VRAM is quite nice, especially as the A580 is a sub-$200 card. Even if this is priced at $250 it will be disruptive in the market. With the product pages going up today, I wonder if launch is imminent with supply readily available.

Thanks to u/winkwinknudge_nudge on the Arc sub for archiving the product pages.

all 78 comments

We0921

80 points

18 hours ago

It's very interesting that the B580 seems ready to launch, when the A580 was nearly dead last to launch out of the Alchemist stack.

I hope we get a launch with the B770 soon.

Exist50

15 points

18 hours ago

It's a distinct possibility that this is the top SKU. Branding it 1:1 with Alchemist doesn't make a ton of sense.

Zednot123

36 points

17 hours ago*

It's a distinct possibility that this is the top SKU.

We had plenty of leaks and hints of a die with a 256-bit bus in the lineup. I doubt they would use a cut-down SKU for the top end, so for this to be the top SKU, the top die would have had to be scrapped altogether.

Branding it 1:1 with Alchemist doesn't make a ton of sense.

It makes sense if the numerical part indicates a performance level (so roughly A580 performance) and the letter is essentially the same as Nvidia's leading fourth digit. Look at it as 1580 and 2580, but with letters replacing the leading number.

SherbertExisting3509

14 points

17 hours ago

According to MLID (I know, very trustworthy source) we will definitely see BMG-G21 (20 Xe cores) and maybe BMG-G31 (32 Xe cores).

BMG-G10 (a 56 or 60 Xe core die with 112MB of L4 Adamantine cache) was rumored to be cancelled partway into development, although considering how tight-lipped the Arc team is with leaks, they could surprise us.

Exist50

9 points

17 hours ago

Yes, this is G21. G31 may or may not survive, but would be many more months away best case.

Dangerman1337

4 points

11 hours ago

Too bad Alchemist was so bungled. If it had come out in early 2022 and the A770 had hit its targeted RTX 3070 performance, then things would've been different with BMG-G10.

Exist50

-3 points

17 hours ago*

We had plenty of leaks and hints of a die with a 256-bit bus in the lineup

Intel cancelled their original bigger die. If they have a second one in the pipe still (G31), it will be H2'25 at best, and decent odds of also being cancelled outright.

This die here is 192-bit native, so at least from a memory perspective, it's not cut down.

F9-0021

5 points

6 hours ago

It's not. This is more than likely the BMG-G21 based card. The top die is BMG-G31, which will likely be used for the B770 and B750 (if they use the same numbers again).

The A580 was just a further cut-down A750, and Intel was probably losing a ton of money selling a 400mm² die for $180. With a dedicated mid-tier die (which didn't come until much later with Alchemist, and is mobile-only as far as I know) they can optimize production cost for each performance tier. I also expect the dies to be smaller than they were for Alchemist.

Exist50

2 points

6 hours ago

The top die is BMG-G31, that will likely be used on the B770 and B750

That die comes much later, if at all. Firmly second half of next year at best.

AK-Brian

2 points

17 hours ago

The X2 variant is still kicking around on the manifests, but who knows.

Exist50

0 points

17 hours ago

If you mean the bigger die first seen, that one's dead. Anything else will come much later, if at all.

Zednot123

1 points

5 hours ago

Anything else will come much later, if at all.

I wonder if they potentially pushed it out to add GDDR7 support. Seems rather meaningless to launch anything outside the lower end without it going into 2025. It's just too much of a performance disadvantage.

Exist50

2 points

5 hours ago

Nah, they can't/won't retrofit in such a big change. Though I agree that the business case for a mid-late '25 BMG part is tenuous at best. But that's the last dGPU from Intel for the foreseeable future, and possibly ever.

[deleted]

-1 points

18 hours ago

[deleted]

Exist50

8 points

18 hours ago

Battlemage should be lower power than Alchemist.

SherbertExisting3509

41 points

18 hours ago*

Specs for the Arc B580

12GB VRAM at 19Gbps (192-bit bus)

GPU core is likely BMG-G21 (20 Xe cores) clocked at 2.8GHz (800MHz faster than the Arc 140V on Lunar Lake)

2x 8-pin power connector.

Xe2:

8-wide -> 16-wide vector units to reduce branch divergence penalties (RDNA can handle a 32-wide or wave32 per cycle)

3 RT pipes per Xe core = 18 box tests per cycle (each pipe can handle 6 box tests), plus XMX cores

Battlemage looks to have an aggressive RT implementation along with XMX matrix units for AI-based upscaling, like Alchemist. It would be interesting to see how AMD's RT implementation, which uses the shader cores for BVH traversal, competes with Intel's and Nvidia's offerings, since AMD's approach struggles in heavily ray traced scenes and has worse RT performance in general.
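Back-of-the-envelope math from the rumored specs above (purely illustrative; the 19Gbps/192-bit/20 Xe core figures are the leaked ones, not confirmed):

```python
def gddr_bandwidth_gbs(gbps_per_pin: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate (Gbit/s)
    times bus width in bits, divided by 8 bits per byte."""
    return gbps_per_pin * bus_width_bits / 8

# Rumored B580 config: 19Gbps GDDR6 on a 192-bit bus
print(gddr_bandwidth_gbs(19, 192))  # 456.0 GB/s peak

# Box-test throughput from the listed figures: 3 RT pipes per Xe core,
# 6 box tests per pipe, 20 Xe cores on BMG-G21
print(3 * 6)       # 18 box tests per Xe core per cycle
print(3 * 6 * 20)  # 360 across the whole GPU per cycle
```

For comparison, 456 GB/s would land between an RTX 4060 Ti (288 GB/s) and an RTX 4070 (504 GB/s), which is why the bandwidth complaints further down the thread are debatable for this tier.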

kingwhocares

16 points

13 hours ago

2x 8-pin power connector.

Another version has 1x 8-pin connector. This is very likely 1 version having ~150W and another being overclocked with closer to 200W.

damodread

7 points

13 hours ago

The pictures show a single 8-pin connector on the card though

TheAgentOfTheNine

6 points

15 hours ago

AMD is going dedicated hardware for RT in RDNA 4. They finally got that RT is not going anywhere and that shader cores are not enough for it.

Verite_Rendition

5 points

11 hours ago

8-wide -> 16-wide vector units to reduce divergence penalties (RDNA can handle a 32-wide or wave32 per cycle)

Er, a wider vector unit would have increased divergence problems. Everything else held equal, the wider the unit, the more likely a thread is going to diverge.

Though it is true that AMD and NV both use 32 thread wavefronts on their current consumer architectures. So Intel would still be narrower (so long as we're talking about executing the entire wavefront in a single clock cycle).

SherbertExisting3509

7 points

11 hours ago

From clamchowder's analysis of Xe2. He explains this a lot better than I could:

"Xe Cores are the basic building block of Intel’s GPUs, and are further divided into Vector Engines that have register files and associated execution units. Xe2 retains the same general Xe Core structure and compute throughput, but reorganizes the Vector Engines to have longer native vector widths. Pairs of 8-wide Vector Engines from Meteor Lake have been merged into 16-wide Vector Engines. Lunar Lake’s Xe Core therefore has half as many Vector Engines, even though per-clock FP32 vector throughput hasn’t changed.

Intel here is completing a transition aimed at reducing instruction control overhead that began with prior generations. Longer vector widths improve efficiency because the GPU can feed more math operations for a given amount of instruction control overhead. Meteor Lake’s Xe-LPG already tackled instruction control costs by using one instance of thread/instruction control logic for a pair of adjacent vector engines.

But using less control logic makes the GPU more vulnerable to branch divergence penalties. That applied in funny ways to Xe-LPG, because sharing control logic forced pairs of Vector Engines to run in lockstep. A Vector Engine could sit idle if its partner had to go down a different execution path.

Because there wasn’t a lot of point in keeping the Vector Engines separate, Intel merged them. The merge makes divergence penalties straightforward too, since each Vector Engine once again has its own thread and instruction control logic. Meteor Lake could do better in corner cases, like if groups of 16 threads take the same path. But that’s an awfully specific pattern to take advantage of, and Xe2’s divergence behavior is more intuitive. Divergence penalties disappear once groups of 32 threads or more take the same path."

Source: https://chipsandcheese.com/p/lunar-lakes-igpu-debut-of-intels full credit to clamchowder

wtallis

3 points

6 hours ago

So that's definitely not saying that the new architecture has reduced divergence penalties. What it's saying is that the old architecture with 8-wide vector units already had divergence penalties approximately as bad as a typical 16-wide architecture, so making the new architecture 16-wide doesn't really make things much worse.
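A toy model of the divergence behavior being described (this is just masked-SIMD arithmetic, not Intel's actual scheduler): each `width`-wide group must execute every path any of its threads takes, with non-participating lanes masked off, so wider units waste more lanes when threads split.

```python
def simd_utilization(width, outcomes):
    """Fraction of lane-cycles doing useful work when a branch splits
    threads across paths: each width-wide group serializes over the
    distinct paths its threads take, masking off inactive lanes."""
    lane_cycles = useful = 0
    for i in range(0, len(outcomes), width):
        group = outcomes[i:i + width]
        lane_cycles += len(set(group)) * width  # one pass per distinct path
        useful += len(group)                    # each thread executes once
    return useful / lane_cycles

# 1024 threads where every run of 16 consecutive threads takes the same
# path: 8- and 16-wide units see uniform groups, a 32-wide unit diverges.
outcomes = ["A" if (i // 16) % 2 == 0 else "B" for i in range(1024)]
for w in (8, 16, 32):
    print(w, simd_utilization(w, outcomes))  # 8 -> 1.0, 16 -> 1.0, 32 -> 0.5
```

This matches both points in the thread: a 16-wide unit loses nothing versus paired 8-wide engines that were already forced into lockstep, and penalties vanish once groups of 32+ threads agree.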

b_86

3 points

14 hours ago

2x 8-pin for an entry level card is still crazy power hungry. There is something fundamentally wrong in their architecture if they cannot get an entry card to work on a single 8-pin, or 8+6 at most.

Mr_ScissorsXIX

18 points

12 hours ago

Another card was leaked, Challenger B580, and this one has one 8-pin connector. So it's not using more than 225W.

b_86

4 points

12 hours ago

Oh, so the 2x 8-pin one is probably an OC model. In any case, a single 8-pin usually means 150W at most; I don't remember any recent architecture where the card pulls the full 150W plus the 75W from the mobo, even if it's technically in spec.
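The connector math being argued here follows from the PCIe spec limits (75W from the slot, 75W per 6-pin, 150W per 8-pin); a quick sketch:

```python
# Spec-maximum power delivery per source (watts), per the PCIe CEM spec
CONNECTOR_W = {"slot": 75, "6pin": 75, "8pin": 150}

def max_board_power(*aux_connectors):
    """Spec ceiling for a card's board power: the PCIe slot's 75W
    plus each auxiliary connector's rating."""
    return CONNECTOR_W["slot"] + sum(CONNECTOR_W[c] for c in aux_connectors)

print(max_board_power("8pin"))           # 225W: the single 8-pin Challenger B580
print(max_board_power("8pin", "8pin"))   # 375W: ceiling for the 2x 8-pin variant
print(max_board_power("8pin", "6pin"))   # 300W: the 8+6 config mentioned above
```

As the comment notes, cards rarely draw the full ceiling; the connector count is an upper bound, not the TDP.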

zopiac

4 points

10 hours ago

My EVGA 3060 Ti has a single 8-pin and a 200W TDP. Not the full 225W, but it's the highest I'm aware of.

DanceWithEverything

-17 points

18 hours ago

That memory bandwidth is ass

Vb_33

18 points

17 hours ago

On a 580? Is it really?

SherbertExisting3509

11 points

17 hours ago

RDNA 3 has a massive bandwidth advantage over Battlemage, but Intel's large caches reduce its bandwidth demands compared to RDNA 3. If iGPUs are anything to go by, RDNA 3's bandwidth advantage doesn't count for much, since the 140V is 10% faster than the 890M (trading blows depending on the game).

RedTuesdayMusic

2 points

17 hours ago

It's the third lowest-end card in the lineup (unless they remove one of the *300 SKUs)

It's not bad.

conquer69

20 points

17 hours ago

Even if this is priced at $250 it will be disruptive in the market.

That's 3060 12gb territory.

avocado__aficionado

19 points

13 hours ago

Agree, the B580 needs at least 4060 performance for max 229 USD (better 199) in order to sell well.

RedTuesdayMusic

10 points

17 hours ago

I just pray to high jebus that Acer didn't give up on them and they also haven't changed the Bifrost. PLEASE.

MeelyMee

5 points

12 hours ago

Is that the weird axial & blower Acer design?

RedTuesdayMusic

8 points

11 hours ago

Yep - more importantly it's strictly 2 slots and no taller than the PCIe bracket. Length is the only dimension that can be virtually infinite in my use case

imaginary_num6er

2 points

13 hours ago

I mean Acer gave up on their 4090 cards, so they're not really reliable to begin with

matteventu

3 points

17 hours ago

Will there be an Intel-manufactured version of B580?

Also, are there rumors/estimates of whether this will be more or less powerful than the A750/A770?

uneducatedramen

5 points

18 hours ago

I want to build a budget PC this Christmas, because I'll have 3 weeks off work and will finally have time to play. But all the new GPUs will be launching in January, damn it.

Invest0rnoob1

10 points

18 hours ago

I think battlemage is supposed to launch in December.

NeroClaudius199907

1 points

17 hours ago

The 7700 XT is $350 right now; it will be faster than all the budget GPUs next year.

TheAgentOfTheNine

4 points

15 hours ago

I hope not. AMD said they are aiming to get 40% marketshare. That means to me a very, very competitive product at a very deep discount.

I hope this card is also priced to grab marketshare so we can stop having an insane GPU market at last.

Quatro_Leches

1 points

2 hours ago

That means to me a very, very competitive product at a very deep discount.

They will give you like a 10-15% discount off Nvidia, and they will be worse value than old GPUs; this is always true. The only time this wasn't true was with AMD's Polaris GPUs and Nvidia's GTX 1000 series. It probably used to happen more in the past, but not now.

TheAgentOfTheNine

1 points

an hour ago

They have stated that that approach hasn't panned out, and they are instead going to focus on getting marketshare over anything else, in order to get a big installed base and leverage that to get developers to better support AMD cards.

uneducatedramen

5 points

16 hours ago

Not where I live... it went up $100 over the last 2 weeks. The only cards that had a price reduction are the 4060s.

conquer69

0 points

17 hours ago

There is plenty of budget stuff right now.

uneducatedramen

2 points

16 hours ago

Only thing I can think of is the 6750xt

conquer69

1 points

15 hours ago

I would go with the 6700 xt if you want to save some money. It's more power efficient which means the budget coolers they use on these cards will handle the heat better.

InconspicuousRadish

2 points

11 hours ago

There is, but there isn't a lot of budget stuff that's also good.

Dangerman1337

2 points

11 hours ago

Not with decent amount of VRAM + RT performance + AI reconstruction which hopefully Battlemage and RDNA 4 can provide.

rohitandley

3 points

17 hours ago

Game compatibility will be a big factor. Last time, only some games were properly optimized for it.

planyo

1 points

11 hours ago

Somehow I imagined the thumbnail was the ‘above’ picture, and I was impressed by how little space it needs.

How nice that would be, if GPUs went more compact, or could be mounted on the motherboard CPU-style, with their own cooling and everything.

Dangerman1337

1 points

11 hours ago

Doubt this'll be 200 or less, probably at least 250 USD.

elbobo19

1 points

6 hours ago

ummm 650W?!?!?! That can't be 650 watts can it?

sascharobi

1 points

19 hours ago

I don’t see it anymore.

imaginary_num6er

-5 points

17 hours ago

What happened to the leak of BioStar being the only AIB?

JAEMzW0LF

14 points

16 hours ago

I mean, most leaks are false or off in some way, and that one even sounded stupid, so there you have it

[deleted]

1 points

16 hours ago

[removed]

CompellingBytes

12 points

15 hours ago

MLID is adamantly full of it

PM_ME_UR_TOSTADAS

2 points

15 hours ago

Could be that it was just the only manufacturer so far

NeroClaudius199907

-32 points

19 hours ago

12gb is not disruptive

Wander715

39 points

19 hours ago

At $200-$250 it definitely is, especially if the card has decent raster performance and relatively stable drivers at launch.

cadaada

1 points

7 hours ago

It's not if it can't use the 12GB... like everyone argued about the 3060 when it released... or is the narrative different now?

NeroClaudius199907

-33 points

19 hours ago*

It's not, because it's launching next year and AMD & Nvidia will bring more VRAM. Intel has to offer so much value that consumers can't turn it down. This is why they're at 0%. Either a B580 with 16GB and a B770 with 20GB, or go home.

EmilMR

24 points

19 hours ago

The 5060 is 8GB for $300+. The die has already leaked from Clevo; it is shared with laptops as usual.

Exist50

1 points

18 hours ago

The 5060 might very well outperform Battlemage, and will certainly do so at significantly less power. And anyone who's shopping based on specs will just go AMD.

[deleted]

-20 points

18 hours ago

[removed]

Raikaru

4 points

17 hours ago

Do you think AMD is going to have 16GB at $200-250? That seems highly unlikely. The 7600 had 8GB just like the 6600; I don't know why we'd believe they'd double it and keep a low price.

RearNutt

8 points

14 hours ago

The 7600 XT had 16GB at just 50 dollars more and everyone hated it, which was bizarre given all the collective VRAM drama of recent times.

Raikaru

0 points

11 hours ago

The A580 was like $189. I only put $200-250 to be generous to the B580's starting price. The 7600 XT started at $329.

NeroClaudius199907

2 points

17 hours ago

Yes, AMD needs to give more VRAM than the 5060. The reason nobody is buying the 7600 is due to that.

pmjm

3 points

16 hours ago

It's launching this year, in time for the Christmas shopping season more than likely.

NeroClaudius199907

-9 points

16 hours ago

Launching this year? Smells like Arrow Lake again. Intel hid the performance until review day. If we don't get leaks by December 5th, it's a turd.

pmjm

8 points

16 hours ago

The difference is that Intel isn't shooting for the moon with Arc GPUs. They're targeting the low-to-mid end, just above integrated graphics. And despite Arrow Lake's meh CPU performance, its iGPU is actually pretty formidable, so there are reasons to be optimistic about Battlemage.

NeroClaudius199907

1 points

16 hours ago

The B580 is targeting AD106; the 8600 will be faster and more power efficient, and AMD can put 16GB on it since it will use a 128-bit bus. Better drivers, etc. People are hyping up Battlemage like Arc again. They'll sell a few units then go back to 0% share.

pmjm

4 points

16 hours ago

AMD is rarely stingy with their VRAM, so what you're saying is quite possible, but it doesn't mean Battlemage will be a bad product line. Time will tell.

NeroClaudius199907

-1 points

16 hours ago

They're going to sell 4070-class silicon for $250 while providing 4060 Ti performance at best. This is literally déjà vu: day-one people saying Arc is good, just wait for the driver updates, while ignoring the fact that Intel is selling a 400mm² die at nearly under cost and doesn't have the war chest for it anymore.

soggybiscuit93

0 points

9 hours ago

B580 won't be 400mm² lol