Lustre SMB Gateway: Integrating Lustre with Windows
HPC Rendering Cluster, Swinburne

Hardware: Old vs New

Compute
• Old: 60 x Dell PowerEdge 1950
  - 8 x 2.6GHz cores, 16GB, 500GB SATA, 1GbE
  - Win7 x64
• New: 63 x Dell 7910 rack workstations
  - 24 x 2.5GHz cores, 64GB, 4 x 900GB 10k SAS, 256GB SSD, 2 x 10GbE
  - ESXi 5.5u2 hypervisor & Win7 VM

Storage
• Old: 1 x Dell R510
  - 12 x 2TB SATA, RAID5, 1GbE
  - CentOS 5
• New: 4 x Dell R630
  - 8 x 2.4GHz cores, 64GB, 4 x 600GB 10k SAS, 2 x 10GbE, 1 x dual-port Mellanox ConnectX3
  - 1 x MD3460 array, 42 x 600GB 10k, 1TB flash
  - Red Hat EL6

Network
• Cisco Nexus 2232TM-E Fabric Extenders
• Cisco Nexus 6K 6001 switches
• QLogic 12300 QDR InfiniBand switches

What is CTDB?
• Clustered implementation of the Trivial Database (TDB) system
• High-availability service for a clustered file-system

Why use CTDB?
• Compute nodes had to be Windows 7 x64, but no native Windows Lustre client existed
• Save costs on InfiniBand network hardware
• NFS client in Windows is average
• Opportunity to leverage existing network infrastructure in the datacentre
• If Windows could do something well, CIFS/SMB access would be one of them

Differences in CTDB vs SMB

CTDB
• Many hosts for a single file-system
• CTDB service can manage:
  - SMB
  - NMB
  - Winbind
• Host resiliency inbuilt
• Recovery file lock between CTDB hosts
• Shared password db

SMB
• Single SMB host per file-system
• No failover
• Less potential bandwidth

How we implemented CTDB (configuration sketches follow after the slides)
• 2 x physical nodes
• Simple tdbsam password database
• Bonded 2 x 10GbE per host
• Single QDR link per host to Lustre
• Local config files [smb.conf, public_addresses, nodes, etc.]
• Shared config / working files [*.tdb, recovery_lock, etc.]
• Round-robin DNS for public IPs

Swinburne HPC Render Cluster

Lustre Storage
• 12 x Object Servers
• 2 x Metadata Servers
• QDR InfiniBand network

VMware Compute Cluster
• 63 x Dell 7910 rack workstations
• 10GbE connectivity per host
• 1:1 VM to host mapping

CTDB Hosts
• 4 x Dell R630
• 2 x 10GbE in 802.3ad for data
• 2 x 1GbE in 802.3ad for heartbeat
• 1 x Mellanox 2-port ConnectX3
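The "How we implemented CTDB" slide names the local and shared configuration pieces without showing what goes in them. Below is a minimal sketch of that layout for an EL6-era two-node CTDB/Samba gateway, using the legacy (pre-Samba 4.9) option style; all IP addresses, interface names, paths and the share name are placeholders rather than values from the deck.

    # /etc/ctdb/nodes -- one private (heartbeat) address per CTDB node
    10.10.10.1
    10.10.10.2

    # /etc/ctdb/public_addresses -- floating client-facing IPs that CTDB
    # assigns to whichever nodes are healthy (address/prefix + interface)
    192.168.100.10/24 bond0
    192.168.100.11/24 bond0

    # /etc/sysconfig/ctdb -- legacy-style daemon settings
    CTDB_NODES=/etc/ctdb/nodes
    CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
    # the recovery lock must sit on the shared Lustre file-system
    CTDB_RECOVERY_LOCK=/lustre/.ctdb/recovery_lock
    # let CTDB start/stop smbd and nmbd; winbind is not needed for a simple tdbsam setup
    CTDB_MANAGES_SAMBA=yes
    CTDB_MANAGES_WINBIND=no

    # /etc/samba/smb.conf -- clustering-related settings, identical on both nodes
    [global]
        clustering = yes
        security = user
        passdb backend = tdbsam

    [render]
        path = /lustre/render
        read only = no

With clustering enabled, smbd keeps its TDB databases (including the tdbsam password database and lock records) in CTDB, which is what gives the two gateway nodes the shared password db and consistent locking mentioned on the slides.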
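The same slide lists round-robin DNS for the public IPs. In a BIND zone file that is simply multiple A records under one gateway name, so Windows clients are spread across both nodes while CTDB handles moving the addresses on failure; the hostname and addresses here are illustrative only.

    ; zone-file fragment: one gateway name, one A record per CTDB public address
    lustre-smb    IN  A   192.168.100.10
    lustre-smb    IN  A   192.168.100.11

Clients map the share against the single name (e.g. \\lustre-smb\render); because these are CTDB public addresses, a failed node's IP migrates to the surviving node and the DNS answers stay valid.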
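Both the implementation and architecture slides rely on 802.3ad bonds (2 x 10GbE for data, 2 x 1GbE for heartbeat). A sketch of the RHEL/CentOS 6 network-scripts form of such a bond is below; device names and addressing are assumptions, and a matching LACP port-channel must also be configured on the Nexus switch side.

    # /etc/sysconfig/network-scripts/ifcfg-bond0 -- 2 x 10GbE data bond
    # (the node's fixed data address; the CTDB public IPs float on top of this bond)
    DEVICE=bond0
    BOOTPROTO=none
    ONBOOT=yes
    BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast"
    IPADDR=192.168.100.5
    NETMASK=255.255.255.0

    # /etc/sysconfig/network-scripts/ifcfg-em1 -- first slave port (repeat for em2)
    DEVICE=em1
    BOOTPROTO=none
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes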
