- Unify the API to load data ([Point3DRDD](https://github.com/astrolabsoftware/spark3D/pull/63), [SphereRDD](https://github.com/astrolabsoftware/spark3D/pull/64))
- Speed up cross-match methods by using native Scala methods ([Scala](https://github.com/astrolabsoftware/spark3D/pull/58))
- Add a new website; spark3D now belongs to AstroLab Software ([website](https://astrolabsoftware.github.io/))
**README.md** (5 additions, 0 deletions)
- [05/2018] **GSoC 2018**: spark3D has been selected for the Google Summer of Code (GSoC) 2018. Congratulations to [@mayurdb](https://github.com/mayurdb) who will work on the project this year!
- [06/2018] **Release**: versions 0.1.0, 0.1.1
- [07/2018] **New location**: spark3D is an official project of [AstroLab Software](https://astrolabsoftware.github.io/)!
- [07/2018] **Release**: version 0.1.3
## Installation and tutorials
See our amazing [website](https://astrolabsoftware.github.io/spark3D/)!
* Mayur Bhosale (mayurdb31 at gmail.com) -- GSoC 2018.
Contributing to spark3D: see [CONTRIBUTING](https://github.com/astrolabsoftware/spark3D/blob/master/CONTRIBUTING.md).
Note that if you build a fat jar (that is, building with `sbt assembly` rather than `sbt package`), you do not need to specify external dependencies, as they are already included in the resulting jar:
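For illustration, the two submission modes look roughly as follows. The class name and jar paths below are placeholders, not actual spark3D artifacts; the exact jar names depend on your Scala and spark3D versions:

```shell
# Fat jar (sbt assembly): dependencies are bundled, nothing extra to pass.
spark-submit --class com.example.MyApp \
  target/scala-2.11/spark3D-assembly-0.1.3.jar

# Thin jar (sbt package): external dependencies must be listed by hand.
spark-submit --class com.example.MyApp \
  --jars /path/to/dep1.jar,/path/to/dep2.jar \
  target/scala-2.11/spark3d_2.11-0.1.3.jar
```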
In this tutorial we will review the steps to simply create RDDs from 3D data sets.
A point is an object with 3 spatial coordinates. In spark3D, you can choose the coordinate system between cartesian `(x, y, z)` and spherical `(r, theta, phi)`. Let's suppose we have a text file (CSV, JSON, or TXT) whose columns are labeled `x`, `y` and `z`, the cartesian coordinates of points:
```scala
// We assume hdu#1 of filename contains at least 3 columns whose names are `colnames`
// Order of columns in the file does not matter, as they will be re-arranged
```
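The two coordinate systems are related by the usual conversion formulas. As a self-contained sketch (plain Scala, not part of the spark3D API), assuming the convention that `theta` is the polar angle from the z axis and `phi` the azimuth:

```scala
// Convert spherical coordinates (r, theta, phi) to cartesian (x, y, z).
// Assumed convention: theta = polar angle from the z axis, phi = azimuth.
def sphericalToCartesian(r: Double, theta: Double, phi: Double): (Double, Double, Double) = {
  val x = r * math.sin(theta) * math.cos(phi)
  val y = r * math.sin(theta) * math.sin(phi)
  val z = r * math.cos(theta)
  (x, y, z)
}

// A point on the z axis: theta = 0 gives (0, 0, r).
val (x, y, z) = sphericalToCartesian(2.0, 0.0, 0.0)
println(s"($x, $y, $z)")  // prints (0.0, 0.0, 2.0)
```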
A sphere is defined by its center (3 spatial coordinates) plus a radius.
In spark3D, you can choose the coordinate system of the center between cartesian `(x, y, z)` and spherical `(r, theta, phi)`. Let's suppose we have a text file (CSV, JSON, or TXT) whose columns are labeled `r`, `theta`, `phi` (the spherical coordinates of the center) and `radius`:
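To make the "center plus radius" definition concrete, here is a minimal stand-in in plain Scala. The class names are hypothetical and do not correspond to spark3D's own shape classes:

```scala
// Toy stand-ins: a 3D point and a sphere defined by center + radius.
case class Point3D(x: Double, y: Double, z: Double)
case class Sphere(center: Point3D, radius: Double) {
  // A point lies inside the sphere if its squared distance to the
  // center does not exceed the squared radius.
  def contains(p: Point3D): Boolean = {
    val dx = p.x - center.x
    val dy = p.y - center.y
    val dz = p.z - center.z
    dx * dx + dy * dy + dz * dz <= radius * radius
  }
}

val s = Sphere(Point3D(0.0, 0.0, 0.0), 1.0)
println(s.contains(Point3D(0.5, 0.5, 0.5)))  // true: 0.75 <= 1.0
```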
**docs/04_query.md** (8 additions, 8 deletions)

The spark3D library contains a number of methods and tools to manipulate 3D RDDs.
An envelope query takes as input an `RDD[Shape3D]` and an envelope, and returns all objects in the RDD intersecting the envelope (both contained in and crossing the envelope):
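As a toy illustration of the idea (plain Scala collections standing in for an `RDD[Shape3D]`; all names here are hypothetical, not the spark3D API), an envelope query is just a filter on an intersection predicate:

```scala
// Toy stand-ins: an axis-aligned box envelope and spherical objects.
case class Envelope(xMin: Double, xMax: Double,
                    yMin: Double, yMax: Double,
                    zMin: Double, zMax: Double)
case class Ball(x: Double, y: Double, z: Double, r: Double)

// A ball intersects the box if the distance from its center to the box
// (clamped per axis) is at most its radius; this also covers "contained in".
def intersects(env: Envelope, b: Ball): Boolean = {
  def clamp(v: Double, lo: Double, hi: Double) = math.max(lo, math.min(v, hi))
  val dx = b.x - clamp(b.x, env.xMin, env.xMax)
  val dy = b.y - clamp(b.y, env.yMin, env.yMax)
  val dz = b.z - clamp(b.z, env.zMin, env.zMax)
  dx * dx + dy * dy + dz * dz <= b.r * b.r
}

// The query itself: keep every object intersecting the envelope.
def envelopeQuery(objects: Seq[Ball], env: Envelope): Seq[Ball] =
  objects.filter(intersects(env, _))

val env = Envelope(0, 1, 0, 1, 0, 1)
val objs = Seq(Ball(0.5, 0.5, 0.5, 0.1),   // fully contained
               Ball(5.0, 5.0, 5.0, 0.1),   // far away
               Ball(1.05, 0.5, 0.5, 0.1))  // crossing the boundary
println(envelopeQuery(objs, env).size)  // prints 2
```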
A cross-match takes as input two data sets and returns the objects matching, based on the distance between centers or on the pixel index of objects. Note that performing a cross-match between a data set of N elements and another of M elements is a priori an NxM operation, so it can be very costly! Let's load two `Point3D` data sets:
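A toy plain-Scala version of a center-distance cross-match (hypothetical names, not the spark3D API) makes the NxM cost explicit: without partitioning, every pair of points is compared:

```scala
case class Point3D(x: Double, y: Double, z: Double)

// Naive center-distance cross-match: compare every point of `a` with every
// point of `b` (N*M distance computations) and keep pairs closer than eps.
def crossMatch(a: Seq[Point3D], b: Seq[Point3D], eps: Double): Seq[(Point3D, Point3D)] =
  for {
    pa <- a
    pb <- b
    dx = pa.x - pb.x
    dy = pa.y - pb.y
    dz = pa.z - pb.z
    if dx * dx + dy * dy + dz * dz <= eps * eps
  } yield (pa, pb)

val setA = Seq(Point3D(0.0, 0.0, 0.0), Point3D(10.0, 0.0, 0.0))
val setB = Seq(Point3D(0.1, 0.0, 0.0), Point3D(20.0, 0.0, 0.0))
println(crossMatch(setA, setB, 0.5).size)  // prints 1: the pair near the origin
```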
By default, the two sets are partitioned randomly (in the sense that points spatially close are not necessarily in the same partition).
In order to decrease the cost of the cross-match, you need to partition the two data sets the same way. By doing so, you will cross-match only points belonging to the same partition. For a large number of partitions, this decreases the cost significantly:
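The idea can be sketched in plain Scala (hypothetical names, not the spark3D partitioners): key both data sets by the same coarse grid cell, then match only within a cell, replacing one large NxM pass with many small ones. As in the text above, points falling in different partitions are simply never compared:

```scala
case class Point3D(x: Double, y: Double, z: Double)

// Assign both data sets the same partition key: here, a coarse grid cell.
def cellOf(p: Point3D, cellSize: Double): (Int, Int, Int) =
  ((p.x / cellSize).floor.toInt,
   (p.y / cellSize).floor.toInt,
   (p.z / cellSize).floor.toInt)

// Match only within a cell: points in different cells are never compared.
def partitionedCrossMatch(a: Seq[Point3D], b: Seq[Point3D],
                          cellSize: Double, eps: Double): Seq[(Point3D, Point3D)] = {
  val bByCell = b.groupBy(cellOf(_, cellSize))
  for {
    pa <- a
    pb <- bByCell.getOrElse(cellOf(pa, cellSize), Seq.empty)
    dx = pa.x - pb.x
    dy = pa.y - pb.y
    dz = pa.z - pb.z
    if dx * dx + dy * dy + dz * dz <= eps * eps
  } yield (pa, pb)
}

val a = Seq(Point3D(0.2, 0.2, 0.2), Point3D(9.0, 9.0, 9.0))
val b = Seq(Point3D(0.3, 0.2, 0.2), Point3D(9.1, 9.0, 9.0))
println(partitionedCrossMatch(a, b, cellSize = 1.0, eps = 0.5).size)  // prints 2
```

Note the trade-off this sketch shares with same-partition cross-matching in general: close pairs that straddle a partition boundary are missed, since only points in the same partition are compared.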