Purpose

The purpose of this form is to request the creation of a shared storage space on the Ceph cluster. The Ceph cluster is a highly scalable and reliable storage solution designed to handle large amounts of data efficiently. It provides a robust platform for data storage and management, ensuring high availability and redundancy.

Required Information

To request a share on the Ceph cluster, please provide the following information:

  • Share Name: Ensure there are no spaces in the name (example: share_name)
  • Administrators: IT Manager, IT Help Desk Agents
  • Owner: The person responsible for providing approval for group modifications
  • Users: List the users who will have access to the share (full name and email address)
  • Other Information (Optional): Include any specific details such as folder structure, restricted folders, etc.

Instructions for Creating a Mapped Drive in Windows 11

  1. Open File Explorer:

    • Click on the File Explorer icon on the taskbar or press Win + E to open File Explorer.

  2. Map Network Drive:

    • In File Explorer, click on "This PC" in the left pane.

    • Click on the "Computer" tab at the top, then select "Map network drive."

  3. Choose a Drive Letter:

    • In the "Drive" dropdown menu, select a drive letter to assign to the network drive.

  4. Enter Folder Path:

    • In the "Folder" field, enter the path to the shared folder on the Ceph cluster. The format should be: \\[Ceph_cluster_IP_or_hostname]\[share_name]

    • Example: \\192.168.1.100\share_name

  5. Reconnect at Sign-In:

    • Check the box for "Reconnect at sign-in" if you want the network drive to be mapped automatically each time you log in.

  6. Finish:

    • Click "Finish" to complete the process. The mapped drive should now appear in File Explorer under "This PC."

If you encounter any issues or need further assistance, please contact the IT Help Desk.
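
If you prefer to map the drive from a script (for example, a login script) rather than through File Explorer, the sketch below shows one way to do it from Python by shelling out to the built-in Windows "net use" command. The drive letter Z: and the server address are placeholders taken from the example above; substitute your own share's details.

    # Minimal sketch: map a Ceph SMB share with the built-in "net use" command.
    import subprocess

    DRIVE = "Z:"                           # any free drive letter
    SHARE = r"\\192.168.1.100\share_name"  # placeholder address from the example above

    # /persistent:yes mirrors the "Reconnect at sign-in" checkbox in step 5
    result = subprocess.run(
        ["net", "use", DRIVE, SHARE, "/persistent:yes"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        print("Mapping failed:", result.stderr.strip())
    else:
        print(SHARE, "mapped to", DRIVE)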

Steps to Connect to a Ceph Share on macOS

  1. Open Finder
    Click on the Finder icon in the Dock to open a Finder window.

  2. Open 'Connect to Server'
    In the Finder menu bar at the top of the screen, click Go, and then select Connect to Server from the drop-down menu.

  3. Enter Server Address
    In the Server Address field, type the network path to the Ceph share you want to connect to. Use the format:

    smb://[Ceph_cluster_IP_or_hostname]/[share_name]

    Example:

    smb://192.168.1.100/share_name

  4. Connect
    Click Connect. You will be prompted to enter your credentials.

  5. Enter Credentials
    Enter your username and password for the Ceph share and click OK. You may also check the option to save your credentials in your Keychain for easier access in the future.

  6. Select Volume
    If the server has multiple shares, you may be prompted to select the volume you wish to mount. Choose the appropriate share.

  7. Mapped Drive Appears in Finder
    Once connected, the Ceph share will appear in Finder under the Locations section. You can now access and manage files on the shared drive.

  8. Reconnect Automatically
    To ensure the drive reconnects after a reboot, drag the mounted drive icon from Finder into your Login Items (System Settings > General > Login Items).
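
If you need the share mounted from a script rather than through Finder, the sketch below shows one way to do it from Python using macOS's osascript and the AppleScript "mount volume" command, which mounts the share under /Volumes just as Connect to Server does. The server address is the placeholder from the example above; substitute your own share's details.

    # Minimal sketch: mount a Ceph SMB share via AppleScript's "mount volume".
    import subprocess

    SHARE_URL = "smb://192.168.1.100/share_name"  # placeholder from the example above

    # "mount volume" prompts for credentials, or uses ones saved in the Keychain (step 5)
    result = subprocess.run(
        ["osascript", "-e", 'mount volume "' + SHARE_URL + '"'],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        print("Mount failed:", result.stderr.strip())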

Ceph Shares

The College of the Environment has a few unique Ceph Shares.

Cryoshare - Data for Alia Khan

Psp_ac_map - GIS Data previously stored on physical servers

Faculty - Storage available by request for large datasets

Student - Storage available for faculty to use for students. This data is kept for the year and then removed. Student assignments are handled by IT Management.
Groups can be found here: https://wwuhelp.atlassian.net/wiki/spaces/CENVIT/pages/2486042777/Active+Directory+Guide+for+CENV#StuSections

Ceph Cluster FAQ

Do I need to be onsite?

It is recommended that you be onsite when working with large files. If you are not able to be onsite, you can use the VPN.
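
If you are not sure whether the cluster is reachable from your current network (onsite or over the VPN), a quick check is to test whether its SMB port answers. Below is a minimal Python sketch; the address is the placeholder used in the examples above, not the real cluster address.

    # Minimal sketch: check that the Ceph cluster's SMB port (TCP 445) is reachable.
    import socket

    HOST = "192.168.1.100"  # placeholder cluster address from the examples above
    PORT = 445              # standard SMB port

    try:
        with socket.create_connection((HOST, PORT), timeout=5):
            print("SMB port is reachable; drive mapping should work.")
    except OSError as exc:
        print("Cannot reach", HOST, "-", exc, "- connect to the VPN and try again.")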

...