
Hey!

I'm getting a new desktop which will basically be used for LDD + software development, maybe WoW/D3, but mostly LDD + Visual Studio, so I have a couple of questions regarding the configuration.

1. Does LDD benefit from an Nvidia Quadro or ATI FirePro graphics card?

2. Does anyone use LDD spanned over multiple monitors? Specifically, are the gaps between the monitors too distracting (even with narrow-bezel ones)?

3. Is it worth getting a mid-range SSD for the main drive compared to a WD Raptor (10,000 RPM) HDD?


1) I don't think a professional OpenGL card will help with LDD, even though LDD uses OpenGL, for two reasons: first, LDD is a consumer product, and it seems unlikely to me that it uses any special OpenGL extensions. Second, LDD is not a major 3D application, so it doesn't receive driver optimizations for this kind of video card. You will probably get better performance from a consumer video card at the same price, because it has a much more powerful GPU.

2) I think that is a matter of personal preference. The loss of "continuity" can certainly be a little uncomfortable, and a single 2560-wide display would probably be better in some situations. Alternatively, you could adopt an asymmetrical setup, with a high-resolution main monitor and one or two standard secondary monitors on the sides.

3) There is no comparison between a WD VelociRaptor and a good SSD. The SSD is much better.

A VelociRaptor could still be interesting in two situations:

- you need more space, and mechanical disks offer a better $/GB ratio: a 600 GB VelociRaptor costs only a little more than a 128 GB new-generation SSD (see the rough comparison below);

- you do massive writes to the disk, and those writes are more sequential than random.

If you need a very fast machine, you could buy an SSD for the system and a VelociRaptor to hold the files you are working on.
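As a rough illustration of that $/GB point (the prices in this sketch are hypothetical placeholders, not quotes; plug in whatever your local shop charges):

# Rough $/GB comparison, mechanical disk vs SSD.
# The prices below are hypothetical examples only; substitute current local prices.
drives = {
    "600 GB VelociRaptor": (600, 280),   # (capacity in GB, hypothetical price)
    "128 GB SSD":          (128, 230),
}

for name, (capacity_gb, price) in drives.items():
    print(f"{name}: {price / capacity_gb:.2f} per GB")

# The mechanical disk ends up several times cheaper per gigabyte,
# even though the SSD is far faster.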


1. Yeah, my thoughts were along the same lines.

2. My plan is 3x Dell U2412M in portrait mode. Each panel is 1920x1200, so rotated that's 1200x1920 per monitor, which makes 3600x1920 in total. For the last 8 years or so I was using a 21" with one or two 19" monitors on the sides, and the problem was the mismatched pixel pitch (about 0.27 mm on the 21" and 0.29 mm on the 19"). That's why I'm going for three identical monitors. The model I have in mind has a pretty narrow bezel, but it's still noticeable. I'll see in time whether it's a matter of getting used to it or whether I'll just switch to a single monitor in normal landscape mode.

3. I have the 300 GB VelociRaptor in my current (old Core 2 Duo) PC, and I was just wondering whether I should keep it as the system drive or buy a new SSD for the system and use the VelociRaptor for secondary data... I don't need much capacity, as I use a NAS for everything else (movies, music, pictures, ...).

Thanks for the answers/opinions, Calabar.


You're welcome. :classic:

About the SSD, I suggest you buy one if your budget allows it. The advantages are so huge that once you've used a PC with an SSD, going back to another PC is really hard! :tongue:

My favourite choice is the Crucial m4 with its Marvell controller (or the Samsung 830... pretty much the same class of disk!) for its stability.

Anyway, the next generation is not too far off, and I think there will be some very interesting options in early summer.


I doubt a professional card from either vendor would give any boost in LDD. Unless you have other apps that require a Quadro or FirePro, you are better off with a consumer card. I personally prefer Nvidia, for a few reasons, but the most important one for LDD is being able to force FXAA (Fast Approximate Anti-Aliasing) in OpenGL apps (like LDD). Getting rid of the majority of the jaggies on the edges of the blocks is much easier on the eyes.

If I had a triple head setup, I would likely have LDD on the center monitor, instructions (.pdf or images) on another, and probably Bricklink (or some TV show I really don't care about) on the third. Trying to do all that on one monitor is a bit cramped, but I manage. :tongue:

Switch to the SSD. They are amazing. I can't stand to use computers with a standard hard drive now. I agree with Calabar about Crucial/Samsung: Crucial uses a Marvell controller and Samsung its own in-house design, and both are among the most stable, especially compared to OCZ's SandForce, which is in nearly all other drives (even some of Intel's) and is known to be a bit finicky. I have a Crucial C300 (the predecessor to the m4 series) 128 GB, and will likely upgrade to something double that when Crucial comes out with their next wave of releases, which will probably be in the next few months as Marvell has just released its newest controller. Really though, the difference in performance between generations of SSDs has slowed down, so this will mainly be for more space. The major performance upgrade is the switch away from a mechanical drive in the first place.


I see we agree on many things, zinfinion, except perhaps the choice of video card brand.

About anti-aliasing, my experience is that it is not possible to enable it together with advanced graphics in LDD, but I only tried MSAA.

About FXAA: it is an Nvidia technology, but it is supported by AMD cards too; it just wasn't possible to force it through the drivers (I don't know whether recent drivers allow that now). AMD has a similar technology, MLAA, but it is slower.

Now Nvidia has introduced TXAA, which seems very interesting, but I'd like to see real results before judging it.

@Bojan

Obviously, if you want some suggestions about the rest of the computer, you can ask here! :wink:


Regarding forcing Nvidia's FXAA in OpenGL, it's still experimental and has to be toggled using Nvidia Inspector, and it only works in the 290.53 drivers (as far as I'm aware). The two most recent 200-series driver releases broke it (along with other things; Nvidia is probably distracted by the new 680 cards), so it's still very much an unofficial thing.

Regarding AA in general, LDD is definitely doing something that standard AA methods don't seem to work with. I tried all the options, and basically lucked out with FXAA being the one thing that worked. There may be a way to enable AA in LDD on AMD, but not having one of their cards, I have no way of knowing for certain. Obviously the optimal solution would be for the LDD devs to build anti-aliasing support into the app itself, but until then I hope both Nvidia and AMD can come up with easy to implement workarounds.

As far as TXAA goes, I'm intrigued, but also a bit dismayed, since it requires the developers of the software to enable it. But they said that about FXAA as well, and now there are ways to inject that, as well as SMAA, into most DirectX apps (who am I kidding, I mean games :tongue: ). All in all, it's a very exciting time for AA and general image quality. After years of practically no improvements to AA (especially on the memory and workload fronts), all of a sudden there are more than four new methods, and probably more we haven't heard of yet. Timothy Lottes, pretty much the guy behind FXAA and TXAA, has some insightful details on his site.


@Calabar

Ok, here's the configuration, feel free to tear it apart :) :

- Gigabyte GA-Z68MA-D2H-B3 (maybe the GA-Z68X-UD3P-B3, but tbh I don't see a 50€ advantage...)

- Intel i7 2600

- 2 x 8 GB DDR3 (Corsair Vengeance or something else... 1600 MHz)

- Sapphire or Gigabyte Radeon HD 7850 2 GB OC

- I was thinking of an OCZ Vertex 3 120 GB, but could go with a Samsung 830 series 128 GB or an Intel 520 series 120 GB (not much price difference)

- 3x Dell UltraSharp U2412M

- some 600-700W power supply

- keeping the old Thermaltake Tsunami case, the WD VelociRaptor 300 GB and a WD Black 1 TB (as secondary and tertiary HDDs), the MS Natural keyboard and the Logitech MX-518 mouse

The price limit is 1800€, and this config comes to exactly that here in Slovenia.

I do have second thoughts about the proc though; dunno if I should wait for Ivy Bridge to come out.

The computer will be used primarily for Visual Studio and LDD (programming as my main job & LEGO as my main "free time job"). I'll play Diablo 3 when it comes out (and maybe some World of Warcraft), but no other games.

And it should last for, let's say... at least the next 3 years.

p.s.:

I am thinking of an Nvidia card because I'd like to learn CUDA, but I think OpenCL will be OK too with an AMD/ATI card...

And I have no idea about TXAA, FXAA, SMAA and all the other ??AAs out there, as I've never used them.



Nice spec Bojan :cry_happy:

I'm waiting for Ivy Bridge myself. I was ready to buy a new system this fall with Bulldozer, but since that didn't add much performance improvement, I thought I'd wait for Ivy.

I'll definitely go with an SSD too.


Without going into too much technomumblejumble regarding all the new anti-aliasing methods: around 4+ years ago (mainly due to Unreal Engine 3) a lot of games switched to deferred shading for lighting, which is mostly incompatible with the established multisample anti-aliasing (MSAA) that had been around for years. There were ways around it, but they were very memory- and GPU-intensive, so not all that helpful unless you had a beast of a card. So the GPU vendors started working on anti-aliasing shaders that run after the lighting passes to remedy this. AMD was first with MLAA for AMD GPUs, then Nvidia with FXAA for all GPUs, then SMAA was done by a third party for all GPUs, then there was talk of Nvidia's SRAA, which seems to have been pushed aside, and now there is Nvidia's TXAA, which will need to be enabled by developers in their games/apps.
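To give a very rough idea of what these post-process methods do (this is only a toy sketch of the general idea, not the actual FXAA/MLAA/SMAA algorithms, and in a real game it would run as a pixel shader on the GPU rather than in Python): they look at the finished frame, find high-contrast luminance edges, and blend pixels along those edges, which is why they don't care how the engine lit the scene in the first place.

import numpy as np

def toy_post_process_aa(frame, threshold=0.1):
    """Soften hard edges in a finished RGB frame (H x W x 3 floats in 0..1).
    A toy illustration of post-process AA, not real FXAA."""
    # Approximate luminance of each pixel
    luma = frame @ np.array([0.299, 0.587, 0.114])
    # Horizontal and vertical luminance differences mark "jaggy" edges
    grad_x = np.abs(np.diff(luma, axis=1, prepend=luma[:, :1]))
    grad_y = np.abs(np.diff(luma, axis=0, prepend=luma[:1, :]))
    edges = (grad_x + grad_y) > threshold
    # Pre-compute a small neighbourhood average (a mild blur)
    blurred = frame.copy()
    blurred[1:-1, 1:-1] = (frame[1:-1, 1:-1] + frame[:-2, 1:-1] + frame[2:, 1:-1]
                           + frame[1:-1, :-2] + frame[1:-1, 2:]) / 5.0
    # Replace only the edge pixels with their blurred version
    out = frame.copy()
    out[edges] = blurred[edges]
    return out

Because a filter like this only needs the final image, it works with deferred renderers, HDR, alpha-tested foliage and so on, where MSAA either breaks or gets very expensive.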

Long story short, they are a new way of doing anti-aliasing that is more compatible with modern game engine rendering techniques; TXAA looks like it will be the most compatible so far, especially when HDR and similar effects are present. The other upside is that they are faster than MSAA, work on alpha transparency, use less VRAM, etc. Nvidia's 300-series drivers (currently for the GTX 680 only) allow forcing FXAA on DirectX games in the control panel; I am uncertain how/if it is implemented for OpenGL in those specific drivers, as I don't have a 680 (yet! :tongue: ). I am also unsure whether MLAA on AMD cards can be forced in OpenGL.

So mainly it's for games; I tend to use SMAA in games as it has the best image quality. The FXAA in the 300-series drivers is a newer revision than the current injectors, so it might offer better quality as well.

As far as LDD goes, it's hard to really capture the difference anti-aliasing makes in screenshots; it's mainly noticeable when rotating and zooming in and out. It's not a must-have, but it is a bit easier on the eyes, at least for me.

As far as your build goes, unless you plan to do a lot of virtualization or video encoding or other things that really benefit from the four extra Hyper-Threading threads of the 2600, you could probably do better with a 2500K and overclock it. I have mine running at 4.5 GHz on the stock Intel cooler. Everything else looks fine. I use Corsair power supplies exclusively; they are definitely one of the best brands. And as far as Ivy Bridge goes, from what I have seen there is practically no increase in speed or anything else compared to Sandy Bridge.


And as far as Ivy Bridge goes, from what I have seen there is practically no increase in speed or anything else compared to Sandy Bridge.

That's interesting news. There's a lot of hype around it, for sure. In any case, I'm a "silent PC" nut, and I'm looking forward to the reduced power consumption of Ivy Bridge (from what I've read, that is).


A quick and not exhaustive answer.

- 1800€ seems too much to me at first glance, but I'd need to check every part before giving a final word. EDIT: I forgot to count the 3 monitors... :grin:

- as you have chosen the mainstream Intel platform, it would surely be better to wait for the new Ivy Bridge CPUs.

- the HD 7850 is a very good mid-range card, but it is a bit overpriced right now. It would probably be better to wait a few weeks; the release of the new Nvidia cards will hopefully lower the prices of the new AMD cards. Unfortunately, at the moment Nvidia doesn't offer a new card in that range.

As for GPGPU, the new Nvidia cards are not as good as the old Fermi (5xx) ones, while the new generation of AMD cards is very good at it.

- The power supply is one of the most important parts (the "heart") of your computer; choose it with care.

- About the SSD, remember that the Intel 520 has a SandForce controller (though with custom firmware). It is good and very fast, but this round I think the Marvell controller is better. You could even consider buying the SSD later, when the 4th generation comes out.


@zinfinion

Yeah, you guessed it... I do some virtualization (I need to test software in different environments, so I tend to run 2 or 3 VMs in addition to the host OS), so the i7 2600 is final. I might even consider "scrapping" one monitor to put an additional 250€ towards the graphics card, going from 260€ to 510€... I can always buy the third monitor later; changing the graphics card is not that "easy".

@superkalle

Well, I have to buy a new one... I have quite a few problems with my current desktop PC. I do have a Dell M6600 notebook which I use when the desktop doesn't want to cooperate, but I'd still like a desktop with multiple monitors.


That's interesting news. There's a lot of hype around it, for sure. In any case, I'm a "silent PC" nut, and I'm looking forward to the reduced power consumption of Ivy Bridge (from what I've read, that is).

I really need to add caveats to my statements. :tongue: I mainly look at CPUs/GPUs in terms of gaming performance, and obviously a die shrink of a CPU won't gain much performance there, since almost all games are GPU-bound. It's quite possible that it does better in other areas, but I'm not expecting more than a 5-10% increase (wild speculation) in most things. The lower TDP and integrated USB 3.0 are nice additions that I would most certainly wait for if I were upgrading from a previous generation of CPU/motherboard.

Yeah, you guessed it... I do some virtualization (I need to test software in different environments, so I tend to run 2 or 3 VMs in addition to the host OS), so the i7 2600 is final.

Be sure to look into the K model of the 2600 (or the equivalent Ivy Bridge), which can be overclocked. All it takes is changing one number (the multiplier) in the BIOS, and it is super simple. I'm at a 36% overclock, and could easily go higher with better cooling.
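For what it's worth, the arithmetic behind that is simple (a sketch, assuming the usual Sandy Bridge 100 MHz base clock and the i5-2500K's 3.3 GHz stock speed):

# K-series Sandy Bridge overclocking is just multiplier x base clock.
# Assumes the standard 100 MHz base clock and a 3.3 GHz stock speed (i5-2500K).
base_clock_mhz = 100
stock_ghz = 3.3
multiplier = 45                                   # the "one number" changed in the BIOS

target_ghz = multiplier * base_clock_mhz / 1000.0
overclock_pct = (target_ghz - stock_ghz) / stock_ghz * 100
print(f"{target_ghz:.1f} GHz, about {overclock_pct:.0f}% over stock")
# -> 4.5 GHz, about 36% over stock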


@zinfinion

Not really necessary... I won't overclock. Never did, never will; everything will be running at stock speed. I won't even bother with extra cooling. At one time I had a P4 3.0 Prescott... It didn't even run properly with the stock cooler at stock speed, and I had to buy a heatpipe cooling "tower" that weighed almost half a kilogram and basically put a lot of stress on the motherboard. That was the first and last time I ever bothered with cooling, so I don't really care that much about performance as long as it's rock solid. With that proc I even had to keep the case open all the time. So I guess I'll wait for the Ivy Bridge procs to come out and get one of those.

My primary goal is to have a working computer that I don't need to stress-test for a day with Prime95 or similar running at full load just to see whether it's stable.


Bojan Pavsic,

As someone who uses a workstation graphics card, I figured I should chime in and offer my two bricks on the matter. But first, let me clarify that my computer is nothing, age-wise, compared to what has been discussed here so far. Hopefully I can upgrade to the newest generation soon.

LDD was, and is (don't quote me on this, as I'm unsure whether it is still the case), built on Qube Software's OpenGL-based graphics engine, which allowed for easy cross-platform use. The problem is that LDD has gone through several changes and its internals no longer resemble the original form. The main data archive is now Assets.lif, which I don't recognize. It has been quite a while since I used the Qube Software development kit to poke through the original database file, and I can't remember the extension of the original, so I can't say whether it is the same or not.

LDD currently follows a trend I have noticed in which the application relies heavily on the CPU and available memory. For example, those in manufacturing will be familiar with the 3D CAD application SolidWorks. The general assumption would be that the better the workstation graphics card you run it on, the better it performs. Unfortunately this is not the case, as the application is heavily CPU- and memory-dependent. There are some benefits to having a better card, but the emphasis should be on the CPU and the available memory. The one exception is the advanced OpenGL shaders used by most if not all 3D CAD systems for real-time, life-like texturing; SolidWorks does use these if they are available, and this is by no means a Direct3D feature. On the other end of the spectrum are applications like Autodesk Maya for 3D content creation, which will make full use of whatever graphics card you run them on; obviously, the better the card, the bigger the project you can tackle. Another example is LDView, which also benefits greatly from a better graphics card.

That side note aside, for reference I run LDD on a 2.2 GHz single-core processor, 1 GB of DDR2 and an Nvidia Quadro FX 1400 driving a single 19" 1280x1024 monitor. For my building needs I run LDD without advanced shaders enabled, but if I want a snapshot I will turn them on. I found that enabling them dramatically increases the CPU and memory load compared to running without them; the added load reduces my frame rate to a mere few frames per second. It would appear that any initial benefit the application had, or could have had, was lost as more Windows-based components were integrated into it.

As a side note, and as I alluded to above, any OpenGL-specific program I run is a different story, as it can put the graphics card to work and get its full output. The card may not be blazing fast, but it will handle whatever is thrown its way and keep moving. That is the difference between consumer and workstation-grade graphics cards: where the consumer cards would choke on the polygon counts and shader levels present, the workstation cards keep going as if nothing happened. Then again, game-oriented OpenGL graphics are a different story, as games tend not to be coded to take advantage of a workstation card's strengths and can actually run slower on them.

So I echo the remarks of those who have already posted: unless you need workstation-level graphics, stick with consumer-level (or enthusiast-level, if you are a gamer) graphics cards, as those should more than suffice for your needs.

Example screenshot

I hope this helps to clear things up.

3D LEGO


As I wrote, I have a notebook (Dell Precision M6600, i7 / 16 GB / SSD + HDD / Quadro 3000M 2 GB graphics), but I don't have anything similar with a consumer graphics card to compare it with, so I can't really tell whether it's better or worse. Since LDD doesn't support multithreading, i7 vs i5 doesn't really matter for it, but I need a more thread-capable processor for other things, so the i7 is the clear winner.

On my current desktop computer, I usually use LDD with advanced graphics on (edges on, lowest quality), unless the model gets over 2000 or so bricks. I do try to compensate with groups & hiding, but it's a bit annoying since LDD doesn't support unhiding a specific group.

My current desktop config is an E7200, 2 GB memory, Radeon HD 3870 512 MB and the WD VelociRaptor, and it would work for some more time, but I'm getting a problem where, from time to time, the graphics driver "resets". The screen goes blank for 10-15 seconds, then comes back with a message that "Windows recovered from a graphics failure" or "graphics driver failure" or something similar. Everything keeps working, except LDD. This happens about once a week (the computer is on 24/7), and I've lost quite a few hours of LDD work because of it. Everyone will probably say it's the graphics card, but I'm pretty sure it's the mainboard, since I've had problems with it from the start (Windows 7 not working with my previous graphics card, an Nvidia 7600GT; LDD problems in XP with that card, etc.). It's a "combo" mainboard, an ASRock 4CoreDual-SATA2: it has AGP (8x) and PCI-E (4x) slots, DDR and DDR2 memory, etc. I bought it to have a cheaper upgrade path from a P4 to a Core 2 Duo, keeping my old DDR memory & AGP graphics. Over time I switched the memory to DDR2 and the graphics to a PCI-E card, but there were always some problems... And now an opportunity has opened up to get a new PC (some freelance afternoon/weekend work where, instead of getting paid, I'd get a computer), so I took it.

Well, my point was that on your computer it's probably the memory that's holding LDD back. Usage gets to half a gig quite fast, and with only 1 GB, that's probably the main bottleneck. I've noticed that I fill up my 2 GB pretty fast too.

There was quite a big difference between LDD 3.x and 4.x (with all the new shaders and additional geometry data).

It's time to move on anyway. I'll probably just wait for Ivy Bridge to come out and buy then. There might even be some changes in the SSD "market" by then.


I just replaced an older video card in my main rig with dual GTX 580s.

I saw very little improvement in overall LDD performance. It still starts to stutter at over 15k bricks and becomes very slow at over 20k. I used set 10179 UCS Millennium Falcon, posted in the official LDD project list, then copied and pasted the ship multiple times. At the 4th ship (20k total bricks) it was slow to position and place, even on an empty spot. At the 5th MF (26k total bricks) it took 3-4 minutes before I could place it. I gave up waiting at the 6th MF (31k bricks total).

I did notice a bigger improvement in overall performance when I overclocked the CPU from the stock 2.6 GHz to 4 GHz. However, overclocking is not for everyone.

Now if only LDD could properly use multi-core CPUs, my main rig could really fly. After all, dual Xeon X5650s give a grand total of 24 threads running at 4 GHz. Full multi-core support would mean a theoretical project of over 350,000 pieces before I start to lose speed.
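Roughly, that figure assumes perfectly linear scaling, which real software never quite manages, but as a back-of-the-envelope estimate:

# If LDD bogs down around 15k bricks when effectively limited to one core,
# perfectly linear scaling across 24 hardware threads would give about
# 15,000 x 24 = 360,000 bricks. Real-world scaling would be far worse.
single_thread_limit = 15_000      # bricks before the stutter starts (observed)
hardware_threads = 2 * 12         # two Xeon X5650s, 6 cores / 12 threads each

ideal_limit = single_thread_limit * hardware_threads
print(f"Ideal-case limit: about {ideal_limit:,} bricks")   # ~360,000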


LDD is already multithreaded, as can be seen in Task Manager. However, I don't think the algorithms it uses scale well beyond about 15K pieces. LDD on my quad-core i7 starts to struggle around then, despite the fact that the machine is barely feeling the strain. I suspect it's calculating the connection points that kills it, as you see similar problems when moving moderately large constructions over each other.
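If you want to check that outside of Task Manager, here's a quick sketch using Python's psutil (this assumes the process is called LDD.exe; adjust the name if your install differs):

# Count how many threads the LDD process is actually running.
# Assumes the executable is named "LDD.exe" -- change it if yours differs.
import psutil

for proc in psutil.process_iter(attrs=["name"]):
    name = proc.info["name"] or ""
    if name.lower() == "ldd.exe":
        print(f"{name} (pid {proc.pid}): {proc.num_threads()} threads, "
              f"{proc.cpu_percent(interval=1.0):.0f}% of one core over 1s")

Seeing a dozen threads doesn't mean a dozen of them are doing the heavy lifting, though, which would be consistent with the connection checks bottlenecking on one or two of them.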

There's a lot more to getting performance on modern systems than just "use more cores", and I don't think LDD is really designed to scale to that size (it was, after all, meant for DbM-sized models).


A little update:

I just bought:

Desktop:

- MB Asus Sabertooth Z77

- Intel Core i7 3770K

- Corsair Vengeance 16GB cl9

- Gigabyte Radeon HD 7950 3GB WindForce 3x

- PSU XFX Core Edition Pro 850W

---------------------------------------------

Notebook (Dell XPS L702x):

- Intel Core i7 2670 QM

- Memory 8GB DDR3

- HDD 750 GB / 7200

- nVidia GeForce GT 555M 3GB

- 17,3" @ 1920x1080

----------------------------------------------

SSD:

- OCZ Vertex 4 - 256 GB

- OCZ Vertex 4 - 128 GB

I'm going to either put the 128 GB SSD into the notebook alongside the HDD and the 256 GB into the desktop, or put the 256 GB SSD into the notebook and move the notebook's HDD & the 128 GB SSD into the desktop... Haven't really decided yet.

For now I tried LDD on the notebook (and spent 3 hours figuring out why high-quality rendering didn't work, until I found the option to choose which graphics processor to use for a specific program - the integrated i7 HD graphics or the Nvidia...) and it looks great. LDD used all 8 threads at least some of the time, so I can confirm it is at least partly multithreaded. I opened my train station with 8k bricks on maxed-out Advanced Shading and it worked flawlessly (on my old desktop I got about 1 fps on minimum Advanced Shading with outlines). The skyscraper with 20k bricks works smoothly too with minimum Advanced Shading (though moving large parts "kills" the frame rate considerably - because of calculating connections...).

This is with the built-in HDD.

I hope I have time this weekend to install the SSDs, put the desktop together, reinstall Windows on both, etc...


Hi,

I am new to LEGO (I started 12 months ago). Now that I wish to start doing some designs, I was wondering what the optimum hardware setup would be for running LDD. As I am not a computer buff, I asked a friend, who is (but isn't into LEGO), to suggest a setup. His suggestion is as follows:

CPU: Intel Core i7 4770 Quad Core LGA 1150 3.4GHz Processor

Motherboards: Gigabyte GA-Z87-D3HP LGA 1150 Motherboard

Memory: Kingston HyperX blu 8GB (2x 4GB) DDR3 1600MHz Memory KHX1600C9D3B1K2/8GX

Video Card: Gigabyte GeForce GTX 770 2GB OC Edition Video Card

Power Supply: Thermaltake SMART 650W 80Plus Bronze Power Supply

Storage (SSD & HDD): Samsung 840 EVO 120GB 2.5" SATA III SSD MZ-7TE120BW

Storage (SSD & HDD): Seagate ST1000DM003 1TB Barracuda 3.5” 7200RPM SATA3 Hard Drive

Optical Drive: Samsung SH-224DB (BB) 24x Internal SATA DVD OEM Burner Drive

Software: Operating System:Microsoft Windows 8.1 - 64-bit - DVD OEM

Advanced Build Options: Upgraded CPU Cooler Installation

I would appreciate some of your expert opinions/suggestions on the above. The approximate cost is AUD 1600.00.


That would be much more than you need for LDD. This is the kind of gaming machine a lot of gamers would dream about :laugh:

Of course, I don't know what else you want to do with your PC, but this is more than good enough for LDD. You would be able to handle some quite big files with that :wink:


@johnboy

You are about to spend over 1.5k AUD on a PC, so why economize with a mid-range SSD and only 8 GB of RAM?

About the storage, maybe a WD Red or a Seagate Video drive could be a better solution (slower, but certified for 24/7 use, silent and with low power consumption).

As legolijntje says, it is a good gaming machine, oversized for use with LDD.


The Samsung 840 EVO is the cheapest and one of the best SSDs at the moment; you shouldn't buy anything else unless you're a power user.

Btw, I've had an 840 EVO 500GB myself for 3 days now.

You're right about the RAM; 8 or 12 GB would be good too. And it's not only the amount that counts, it's also the clock speed: something like 1200 or 1600 MHz would be nice. Above that, you won't really notice a difference in everyday use (or in LDD).


The 840 is a good price/performance SSD, but it has its flaws and it uses TLC memory. Since we are talking about a 120/128 GB size, the 840 Pro (or a similar product) offers a better solution at a limited additional cost.

About the RAM, frequencies and timings are becoming less important, except in specific situations (for example with many AMD APUs, where the memory controller has to feed the GPU, which is hungry for bandwidth).

I think the 1600 MHz RAM he chose is fine for his system, and it seems you agree with me.

Another thing.

The Gigabyte GA-Z87-D3HP uses the Z87 chipset, which will not be compatible with the next die shrink of this generation of Intel processors.

We are very close to the release of the Haswell refresh processors together with the 9-series chipsets.

It might be worth waiting for that update.

