
Conversation

@adity1raut
Contributor

@adity1raut adity1raut commented Jan 20, 2026

Ⅰ. Describe what this PR does

  • test(referencedataset): add unit tests for ConfigMap and DaemonSet operations

Ⅱ. Does this pull request fix one issue?

Part Of #5407

Ⅲ. List the added test cases (unit test/integration test) if any, please explain if no tests are needed.

Ⅳ. Describe how to verify it

Ⅴ. Special notes for reviews

@gemini-code-assist
Contributor

Summary of Changes

Hello @adity1raut, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances test coverage for the referencedataset package by introducing unit tests for its ConfigMap and DaemonSet operations. These tests validate how Fluid replicates and manages Kubernetes resources such as Fuse DaemonSets and configuration ConfigMaps across different data runtime environments, helping ensure reliable data orchestration.

Highlights

  • Unit Tests for DaemonSet Operations: Added comprehensive unit tests for the copyFuseDaemonSetForRefDataset function, ensuring correct DaemonSet replication, proper handling of node selectors for various runtime types (Alluxio, Jindo), and graceful management of existing DaemonSets and missing source DaemonSets.
  • Unit Tests for ConfigMap Operations: Introduced unit tests for the createConfigMapForRefDataset function, validating ConfigMap creation and copying logic across different Fluid runtimes (Alluxio, JuiceFS, GooseFS, Jindo). These tests also confirm correct skipping behavior for EFCRuntime and ThinRuntime, and robust error handling for unsupported runtime types.
  • New Test Suite Setup: Established a new test suite for the referencedataset package by adding referencedataset_suite_test.go, which integrates Ginkgo and Gomega for structured, behavior-driven testing and improves test coverage and maintainability (a sketch of such a suite bootstrap follows this list).
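
For reference, a Ginkgo/Gomega suite bootstrap of the kind described in the last highlight typically looks like the minimal sketch below. Only the file name and package come from the highlights above; the Ginkgo v2 import path and the suite description string are assumptions, so the actual referencedataset_suite_test.go in this PR may differ.

package referencedataset

import (
	"testing"

	. "github.com/onsi/ginkgo/v2" // assumption: v2 import path; the repo may still use github.com/onsi/ginkgo
	. "github.com/onsi/gomega"
)

// TestReferenceDataset hooks the Ginkgo/Gomega specs in this package into
// the standard `go test` runner.
func TestReferenceDataset(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "ReferenceDataset Suite")
}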


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
---------------------+---------------------+--------------------------------------------------------------------
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check its output and use code with caution.

@fluid-e2e-bot

fluid-e2e-bot bot commented Jan 20, 2026

Hi @adity1raut. Thanks for your PR.

I'm waiting for a fluid-cloudnative member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request adds unit tests for ConfigMap and DaemonSet operations related to referencedataset. The tests cover various scenarios, including different runtime types, existing resources, and missing source resources, which is great.

My review focuses on improving the test code's maintainability by reducing code duplication. I've suggested refactoring the repeated creation of refDataset objects into the BeforeEach blocks. This will not only make the tests cleaner but also fix inconsistencies where some test cases were using incomplete refDataset objects.

Comment on lines +74 to +84
refDataset := &datav1alpha1.Dataset{
	ObjectMeta: metav1.ObjectMeta{
		Name:      "ref-dataset",
		Namespace: "ref-ns",
		UID:       types.UID("test-uid"),
	},
	TypeMeta: metav1.TypeMeta{
		APIVersion: "data.fluid.io/v1alpha1",
		Kind:       "Dataset",
	},
}

medium

The refDataset object is repeatedly defined across multiple test cases within this Describe block. To improve code maintainability and reduce duplication, consider defining it once in a var block at the Describe scope and initializing it within the BeforeEach function.

This would also help resolve an inconsistency in the test "when source daemonset does not exist", where the refDataset is missing the TypeMeta field, which is used by the function under test to create an OwnerReference.

Example refactoring:

var _ = Describe("ConfigMap Operations", func() {
	Describe("copyFuseDaemonSetForRefDataset", func() {
		var (
			testScheme *runtime.Scheme
			testObjs   []runtime.Object
			fakeClient client.Client
			refDataset *datav1alpha1.Dataset
		)

		BeforeEach(func() {
			testScheme = runtime.NewScheme()
			// ...
			testObjs = []runtime.Object{}
			refDataset = &datav1alpha1.Dataset{
				ObjectMeta: metav1.ObjectMeta{
					Name:      "ref-dataset",
					Namespace: "ref-ns",
					UID:       types.UID("test-uid"),
				},
				TypeMeta: metav1.TypeMeta{
					APIVersion: "data.fluid.io/v1alpha1",
					Kind:       "Dataset",
				},
			}
		})

		// ... test cases using refDataset
	})
//...
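
Building on that sketch, an individual spec inside the Describe block could then consume the shared refDataset roughly as follows. This is a hedged illustration only: the fake client construction via controller-runtime's fake.NewClientBuilder is an assumption, and the call to copyFuseDaemonSetForRefDataset is omitted because its signature is not shown in this thread.

It("keeps the TypeMeta needed to build an OwnerReference", func() {
	// Assumption: controller-runtime's fake client builder; the PR's tests
	// may construct the fake client differently.
	fakeClient = fake.NewClientBuilder().
		WithScheme(testScheme).
		WithRuntimeObjects(testObjs...).
		Build()

	Expect(refDataset.Kind).To(Equal("Dataset"))
	Expect(refDataset.APIVersion).To(Equal("data.fluid.io/v1alpha1"))

	// The call to copyFuseDaemonSetForRefDataset and its DaemonSet
	// assertions would go here.
})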

@codecov

codecov bot commented Jan 20, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 56.22%. Comparing base (7b2a096) to head (2ce10df).
⚠️ Report is 6 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #5419      +/-   ##
==========================================
- Coverage   57.46%   56.22%   -1.24%     
==========================================
  Files         443      443              
  Lines       30735    30735              
==========================================
- Hits        17661    17282     -379     
- Misses      11460    11886     +426     
+ Partials     1614     1567      -47     

☔ View full report in Codecov by Sentry.

@adity1raut
Contributor Author


PTAL @cheyang @RongGu

Signed-off-by: adity1raut <[email protected]>
@adity1raut adity1raut requested a review from cheyang January 21, 2026 02:39
Collaborator

@cheyang cheyang left a comment


/lgtm
/approve

@fluid-e2e-bot

fluid-e2e-bot bot commented Jan 21, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: cheyang

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
