begin reducing circular deps #2095
Conversation
… allowing Seq to import this rather than inherit from Collection
no substantive changes to this point and feel free to ignore |
…of arguments keyword
Interested to try removing:

```diff
-export class SeqImpl extends CollectionImpl {
+export class SeqImpl {
   /* ... */
+  fromEntrySeq () { return collectionFromEntrySeq(this) }
 }
+CollectionProtoAssign(SeqImpl.prototype)
```
|
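For context, a minimal sketch of what a `CollectionProtoAssign` helper could look like. The name and call site come from the diff above; the body is only an assumption on my part:

```js
// Hypothetical sketch only: copy shared Collection method definitions onto a
// target prototype instead of relying on `extends CollectionImpl`.
const collectionMethodDefinitions = {
  // placeholder for the real shared methods (filter, map, reduce, ...)
  isEmpty() { return this.size === 0; },
};

const CollectionProtoAssign = (targetProto) =>
  Object.assign(targetProto, collectionMethodDefinitions);

// usage: class SeqImpl {}  then  CollectionProtoAssign(SeqImpl.prototype)
```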
In my failed attempt to do this, we noted that a few functions produce dependencies between ordered and unordered collections. |
I should probably refrain from making any more statements here as I continue to gain more understanding... I see there are two kinds of "dependency tree" competing with each other. One comes from class inheritance chains, which strongly push the import tree to look one way. The other comes from instance methods that reference external class constructors and related utilities. My intuition is that flattening the inheritance chains will relieve significant confusion. |
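To illustrate the second kind with a hypothetical, simplified example (not the actual immutable-js source): the inheritance chain pulls imports one way, while an instance method pulls them back the other way.

```js
// Collection.js (hypothetical sketch)
import { Seq } from './Seq'; // needed only because of toSeq() below
export class Collection {
  toSeq() { return Seq(this); } // instance method references the Seq constructor
}

// Seq.js (hypothetical sketch)
import { Collection } from './Collection'; // inheritance pulls in Collection
export class Seq extends Collection { /* ... */ }
// the two imports together form the Seq -> Collection -> Seq cycle
```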
I don't feel (yet) that the interface needs to be changed |
thanks for the input @bdurrer |
Thank you for your effort. I do not know if you will manage to do this, but it's nice to see your motivation 😃 For the record, previous attempts failed; you can check #1961 where we discussed it. A possible way to fix that may be to do like zod did in their latest release: two different packages, the "full" default one, and a "mini" one that removes the chaining possibility. See #1961 (comment) It makes the api more complex as we have two packages to maintain, and the users have to make that choice, but it seems simple to implement, and fixes the size issue for users that only need part of the library. Be aware too that the objective of immutable v6 will be to migrate all the codebase to Typescript. If you still need to work on that, can you convert this PR to draft, and convert it back when it's ready to review? |
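To make the zod-style split concrete, here is a rough sketch of what the two surfaces could look like. The "mini" entry point and its function names are purely hypothetical, not an actual proposal:

```js
// "full" package: chainable methods live on the prototype, so importing
// List pulls in every method implementation.
import { List } from 'immutable';
const a = List([1, 2, 3]).filter((x) => x > 1).map((x) => x * 2);

// "mini"-style package (hypothetical): plain functions over the data
// structures, so a bundler can tree-shake whatever is not imported.
import { list, filter, map } from 'immutable/mini';
const b = map(filter(list([1, 2, 3]), (x) => x > 1), (x) => x * 2);
```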
I can't think of a way to programmatically generate methods from functions or functions from methods in a way that is practical here. For example, take this collection filter:

```js
const collectionFilter = (collection, predicate, context) => {
  return reify(collection, filterFactory(collection, predicate, context, true));
};

const collectionInheritingObjectOrClassWithMethods = {
  filter(predicate, context) {
    return collectionFilter(this, predicate, context);
  }
};
```

There are at least 8 classes defining this 'filter' method. Even if we define them twice, it seems like there could be a better way where they are declared only one time and somehow transformed for the other use-case. Any good ideas? |
anything I try looks clumsy:

```js
obj[narrow(methodfn.name)] = function () {
  return methodfn.apply(null, [this].concat(Array.from(arguments)));
};
```
|
That's why I think that there is a |
Wow yes it's a huge loss in readability 😱 |
probably not compatible with TypeScript and perhaps not acceptable to you, but sharing anyway, just a sketch:

```js
const transformToMethod = (obj, fn) => {
  const methodname = fn.name
  if (methodname.endsWith('_')) {
    switch (fn.length) {
      case 1: obj[methodname] = function (...a1) { return fn(this, a1) }; break;
      case 2: obj[methodname] = function (a1, ...a2) { return fn(this, a1, a2); }; break;
      case 3: obj[methodname] = function (a1, a2, ...a3) { return fn(this, a1, a2, a3); }; break;
      case 4: obj[methodname] = function (a1, a2, a3, ...a4) { return fn(this, a1, a2, a3, a4); }; break;
    }
  } else {
    switch (fn.length) {
      case 0:
      case 1: obj[methodname] = function () { return fn(this); }; break;
      case 2: obj[methodname] = function (a1) { return fn(this, a1); }; break;
      case 3: obj[methodname] = function (a1, a2) { return fn(this, a1, a2); }; break;
      case 4: obj[methodname] = function (a1, a2, a3) { return fn(this, a1, a2, a3); }; break;
    }
  }
  return obj
}
```

The above transformation may be unconventional, but it would return the necessary method expressions without the downside of adding ...spread or concat to each one. Furthermore, results could be cache-constructed once only. |
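For what it's worth, a tiny usage sketch of the transform above. The `has` function here is purely hypothetical and only exists to show the forwarding of `this`:

```js
// Hypothetical standalone function: the first parameter is the collection,
// the rest are the method arguments.
const has = (collection, key) => collection.data.includes(key);

// fn.name is 'has' and fn.length is 2, so the generated method takes one argument.
const proto = transformToMethod({}, has);

const obj = Object.create(proto);
obj.data = ['a', 'b'];
obj.has('a'); // true: the method forwards `this` as the first argument
```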
And during or after that sequence, methods are "mixed in" to both
The mixins obfuscate the order of definition, and results may differ in cjs vs esm, since esm resolves modules in the reverse order from cjs. Assigning all methods at once, in a single location, would simplify things, eg:

```js
assignProto(ToKeyedSequence, [
  collectionImplFunctions,
  seqImplFunctions,
  keyedSeqImplFunctions,
  collectionImplFunctionsFromMixin,
  keyedSeqImplFunctionsFromMixin
])
```
|
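A minimal sketch of what such an `assignProto` helper could be. This is an assumption on my part: each entry in the list is a plain object of method functions, applied in order so that later entries win:

```js
// Hypothetical helper: merge an ordered list of method-definition objects
// onto a constructor's prototype in one place.
const assignProto = (Ctor, definitionObjects) => {
  for (const defs of definitionObjects) {
    Object.assign(Ctor.prototype, defs);
  }
  return Ctor;
};
```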
I don't have a strong feeling about how to proceed. Maybe a different approach should be used altogether. I understand better now why the interface would need to change as well, and I'm not sure if that's awesome or not. |
Planning to continue developing changes here, but I want to avoid sending unwanted notifications each time something is pushed up. Closing. |
this patch might be useful for anyone attempting this later; it removes the dependency between merge.js and Seq.js:

```diff
diff --git a/src/functional/merge.js b/src/functional/merge.js
index b2d231a0..1bbe1213 100644
--- a/src/functional/merge.js
+++ b/src/functional/merge.js
@@ -1,8 +1,6 @@
import { isImmutable } from '../predicates/isImmutable';
-import { isIndexed } from '../predicates/isIndexed';
-import { isKeyed } from '../predicates/isKeyed';
import { IndexedCollection, KeyedCollection } from '../Collection';
-import { Seq } from '../Seq';
+import { IS_RECORD_SYMBOL, IS_INDEXED_SYMBOL, IS_KEYED_SYMBOL } from '../const';
import hasOwnProperty from '../utils/hasOwnProperty';
import isDataStructure from '../utils/isDataStructure';
import shallowCopy from '../utils/shallowCopy';
@@ -82,18 +80,29 @@ function deepMergerWith(merger) {
return deepMerger;
}
+const anyIsKeyed = (any) => {
+ return Boolean(
+ any &&
+ typeof any === 'object' &&
+ !Array.isArray(any) &&
+ (!isImmutable(any) || any[IS_KEYED_SYMBOL] || any[IS_RECORD_SYMBOL])
+ );
+};
+
+const anyIsIndexed = (any) => {
+ return Boolean(any && (any[IS_INDEXED_SYMBOL] || Array.isArray(any)));
+};
+
/**
* It's unclear what the desired behavior is for merging two collections that
* fall into separate categories between keyed, indexed, or set-like, so we only
* consider them mergeable if they fall into the same category.
*/
function areMergeable(oldDataStructure, newDataStructure) {
- const oldSeq = Seq(oldDataStructure);
- const newSeq = Seq(newDataStructure);
// This logic assumes that a sequence can only fall into one of the three
// categories mentioned above (since there's no `isSetLike()` method).
return (
- isIndexed(oldSeq) === isIndexed(newSeq) &&
- isKeyed(oldSeq) === isKeyed(newSeq)
+ anyIsIndexed(oldDataStructure) === anyIsIndexed(newDataStructure) &&
+ anyIsKeyed(oldDataStructure) === anyIsKeyed(newDataStructure)
);
}
```
|
```diff
diff --git a/src/methods/merge.js b/src/methods/merge.js
index c89dad4c..c29da373 100644
--- a/src/methods/merge.js
+++ b/src/methods/merge.js
@@ -2,6 +2,10 @@ import { KeyedCollection } from '../Collection';
import { NOT_SET } from '../TrieUtils';
import { update } from '../functional/update';
+const collectionPlainSize = (cx) => {
+ return cx._values.size
+}
+
export function merge(...iters) {
return mergeIntoKeyedWith(this, iters);
}
@@ -25,7 +29,7 @@ function mergeIntoKeyedWith(collection, collections, merger) {
return collection;
}
if (
- collection.toSeq().size === 0 &&
+ collectionPlainSize(collection) === 0 &&
!collection.__ownerID &&
iters.length === 1
) {
```
|
An experimental module is working locally here. It is made from the lower-level node manipulations found in Map.js and has no dependence on the other data types.

Node.js

```js
import { hash } from './Hash';
import { probeIsSame } from './probe';
import {
utilArrCopy,
utilArrSetAt,
utilArrSpliceIn,
utilArrSpliceOut,
} from './util';
import transformToMethods from './utils/transformToMethods';
import { SHIFT, SIZE, MASK, OwnerID, SetRef } from './TrieUtils';
import {
NOT_SET,
SHAPE_NODEVALUE,
SHAPE_NODEARRAYMAP,
SHAPE_NODEHASHARRAYMAP,
SHAPE_NODEHASHCOLLISION,
SHAPE_NODEBITMAPINDEXED
} from './const';
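// These size thresholds control when a node is promoted or demoted between
// representations (descriptions inferred from the update paths below):
// - an ArrayMap holding >= MAX_ARRAY_MAP_SIZE entries is rebuilt into trie nodes,
// - a BitmapIndexed node with >= MAX_BITMAP_INDEXED_SIZE children expands into a HashArrayMap,
// - a HashArrayMap that shrinks below MIN_HASH_ARRAY_MAP_SIZE is packed back into a BitmapIndexed node.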
const MAX_ARRAY_MAP_SIZE = SIZE / 4;
const MAX_BITMAP_INDEXED_SIZE = SIZE / 2;
const MIN_HASH_ARRAY_MAP_SIZE = SIZE / 4;
const nodeHashArrayMapGet = (nham, shift, keyHash, key, notSetValue) => {
if (keyHash === undefined) {
keyHash = hash(key);
}
const idx = (shift === 0 ? keyHash : keyHash >>> shift) & MASK;
const node = nham.nodes[idx];
return node
? node.get(shift + SHIFT, keyHash, key, notSetValue)
: notSetValue;
};
const nodeHashArrayMapUpdate = (
nham,
ownerID,
shift,
keyHash,
key,
value,
didChangeSize,
didAlter
) => {
if (keyHash === undefined) {
keyHash = hash(key);
}
const idx = (shift === 0 ? keyHash : keyHash >>> shift) & MASK;
const removed = value === NOT_SET;
const nodes = nham.nodes;
const node = nodes[idx];
if (removed && !node) {
return nham;
}
const newNode = nodeUpdate(
node,
ownerID,
shift + SHIFT,
keyHash,
key,
value,
didChangeSize,
didAlter
);
if (newNode === node) {
return nham;
}
let newCount = nham.count;
if (!node) {
newCount++;
} else if (!newNode) {
newCount--;
if (newCount < MIN_HASH_ARRAY_MAP_SIZE) {
return nodesPack(ownerID, nodes, newCount, idx);
}
}
const isEditable = ownerID && ownerID === nham.ownerID;
const newNodes = utilArrSetAt(nodes, idx, newNode, isEditable);
if (isEditable) {
nham.count = newCount;
nham.nodes = newNodes;
return nham;
}
return nodeHashArrayMap(ownerID, newCount, newNodes);
};
const nodeHashCollisionUpdate = (
nhc,
ownerID,
shift,
keyHash,
key,
value,
didChangeSize,
didAlter
) => {
if (keyHash === undefined) {
keyHash = hash(key);
}
const removed = value === NOT_SET;
if (keyHash !== nhc.keyHash) {
if (removed) {
return nhc;
}
SetRef(didAlter);
SetRef(didChangeSize);
return nodeMergeInto(nhc, ownerID, shift, keyHash, [key, value]);
}
const entries = nhc.entries;
let idx = 0;
const len = entries.length;
for (; idx < len; idx++) {
if (probeIsSame(key, entries[idx][0])) {
break;
}
}
const exists = idx < len;
if (exists ? entries[idx][1] === value : removed) {
return nhc;
}
SetRef(didAlter);
// eslint-disable-next-line @typescript-eslint/no-unused-expressions -- TODO enable eslint here
(removed || !exists) && SetRef(didChangeSize);
if (removed && len === 2) {
return nodeValue(ownerID, nhc.keyHash, entries[idx ^ 1]);
}
const isEditable = ownerID && ownerID === nhc.ownerID;
const newEntries = isEditable ? entries : utilArrCopy(entries);
if (exists) {
if (removed) {
// eslint-disable-next-line @typescript-eslint/no-unused-expressions -- TODO enable eslint here
idx === len - 1 ? newEntries.pop() : (newEntries[idx] = newEntries.pop());
} else {
newEntries[idx] = [key, value];
}
} else {
newEntries.push([key, value]);
}
if (isEditable) {
nhc.entries = newEntries;
return nhc;
}
return nodeHashCollision(ownerID, nhc.keyHash, newEntries);
};
const nodeHashCollision = (cache => (ownerID, keyHash, entries) => {
const nhc = Object.create(cache || (cache = transformToMethods({
shape: SHAPE_NODEHASHCOLLISION,
get: nodeEntryGet,
iterate: nodeEntriesIterate,
update: nodeHashCollisionUpdate,
})));
nhc.ownerID = ownerID;
nhc.keyHash = keyHash;
nhc.entries = entries;
return nhc
})();
const nodesExpand = (ownerID, nodes, bitmap, including, node) => {
let count = 0;
const expandedNodes = new Array(SIZE);
for (let ii = 0; bitmap !== 0; ii++, bitmap >>>= 1) {
expandedNodes[ii] = bitmap & 1 ? nodes[count++] : undefined;
}
expandedNodes[including] = node;
return nodeHashArrayMap(ownerID, count + 1, expandedNodes);
};
const nodeMergeInto = (node, ownerID, shift, keyHash, entry) => {
if (node.keyHash === keyHash) {
return nodeHashCollision(ownerID, keyHash, [node.entry, entry]);
}
const idx1 = (shift === 0 ? node.keyHash : node.keyHash >>> shift) & MASK;
const idx2 = (shift === 0 ? keyHash : keyHash >>> shift) & MASK;
let newNode;
const nodes =
idx1 === idx2
? [nodeMergeInto(node, ownerID, shift + SHIFT, keyHash, entry)]
: ((newNode = nodeValue(ownerID, keyHash, entry)),
idx1 < idx2 ? [node, newNode] : [newNode, node]);
return nodeBitmapIndexed(ownerID, (1 << idx1) | (1 << idx2), nodes);
};
const nodeValueGet = (nv, shift, keyHash, key, notSetValue) => {
return probeIsSame(key, nv.entry[0]) ? nv.entry[1] : notSetValue;
};
const nodeValueUpdate = (
nv,
ownerID,
shift,
keyHash,
key,
value,
didChangeSize,
didAlter
) => {
const removed = value === NOT_SET;
const keyMatch = probeIsSame(key, nv.entry[0]);
if (keyMatch ? value === nv.entry[1] : removed) {
return nv;
}
SetRef(didAlter);
if (removed) {
SetRef(didChangeSize);
return; // undefined
}
if (keyMatch) {
if (ownerID && ownerID === nv.ownerID) {
nv.entry[1] = value;
return nv;
}
return nodeValue(ownerID, nv.keyHash, [key, value]);
}
SetRef(didChangeSize);
return nodeMergeInto(nv, ownerID, shift, hash(key), [key, value]);
};
const nodeValueIterate = (nv, fn) => {
return fn(nv.entry);
};
const nodeValue = (cache => (ownerID, keyHash, entry) => {
const nv = Object.create(cache || (cache = transformToMethods({
shape: SHAPE_NODEVALUE,
get: nodeValueGet,
iterate: nodeValueIterate,
update: nodeValueUpdate,
})));
nv.ownerID = ownerID;
nv.keyHash = keyHash;
nv.entry = entry;
return nv
})();
const nodeIsLeaf = (node) => {
return (
node.shape === SHAPE_NODEVALUE || node.shape === SHAPE_NODEHASHCOLLISION
);
};
const nodesPack = (ownerID, nodes, count, excluding) => {
let bitmap = 0;
let packedII = 0;
const packedNodes = new Array(count);
for (let ii = 0, bit = 1, len = nodes.length; ii < len; ii++, bit <<= 1) {
const node = nodes[ii];
if (node !== undefined && ii !== excluding) {
bitmap |= bit;
packedNodes[packedII++] = node;
}
}
return nodeBitmapIndexed(ownerID, bitmap, packedNodes);
};
const nodeUpdate = (
node,
ownerID,
shift,
keyHash,
key,
value,
didChangeSize,
didAlter
) => {
if (!node) {
if (value === NOT_SET) {
return node;
}
SetRef(didAlter);
SetRef(didChangeSize);
return nodeValue(ownerID, keyHash, [key, value]);
}
return node.update(
ownerID,
shift,
keyHash,
key,
value,
didChangeSize,
didAlter
);
};
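// popCount: classic 32-bit Hamming weight (count of set bits). Used below to
// turn `bitmap & (bit - 1)` into a dense index into the nodes array.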
const popCount = (x) => {
x -= (x >> 1) & 0x55555555;
x = (x & 0x33333333) + ((x >> 2) & 0x33333333);
x = (x + (x >> 4)) & 0x0f0f0f0f;
x += x >> 8;
x += x >> 16;
return x & 0x7f;
};
const nodeBitmapIndexedGet = (nbi, shift, keyHash, key, notSetValue) => {
if (keyHash === undefined) {
keyHash = hash(key);
}
const bit = 1 << ((shift === 0 ? keyHash : keyHash >>> shift) & MASK);
const bitmap = nbi.bitmap;
return (bitmap & bit) === 0
? notSetValue
: nbi.nodes[popCount(bitmap & (bit - 1))].get(
shift + SHIFT,
keyHash,
key,
notSetValue
);
};
const nodeBitmapIndexedUpdate = (
nbi,
ownerID,
shift,
keyHash,
key,
value,
didChangeSize,
didAlter
) => {
if (keyHash === undefined) {
keyHash = hash(key);
}
const keyHashFrag = (shift === 0 ? keyHash : keyHash >>> shift) & MASK;
const bit = 1 << keyHashFrag;
const bitmap = nbi.bitmap;
const exists = (bitmap & bit) !== 0;
if (!exists && value === NOT_SET) {
return nbi;
}
const idx = popCount(bitmap & (bit - 1));
const nodes = nbi.nodes;
const node = exists ? nodes[idx] : undefined;
const newNode = nodeUpdate(
node,
ownerID,
shift + SHIFT,
keyHash,
key,
value,
didChangeSize,
didAlter
);
if (newNode === node) {
return nbi;
}
if (!exists && newNode && nodes.length >= MAX_BITMAP_INDEXED_SIZE) {
return nodesExpand(ownerID, nodes, bitmap, keyHashFrag, newNode);
}
if (exists && !newNode && nodes.length === 2 && nodeIsLeaf(nodes[idx ^ 1])) {
return nodes[idx ^ 1];
}
if (exists && newNode && nodes.length === 1 && nodeIsLeaf(newNode)) {
return newNode;
}
const isEditable = ownerID && ownerID === nbi.ownerID;
const newBitmap = exists ? (newNode ? bitmap : bitmap ^ bit) : bitmap | bit;
const newNodes = exists
? newNode
? utilArrSetAt(nodes, idx, newNode, isEditable)
: utilArrSpliceOut(nodes, idx, isEditable)
: utilArrSpliceIn(nodes, idx, newNode, isEditable);
if (isEditable) {
nbi.bitmap = newBitmap;
nbi.nodes = newNodes;
return nbi;
}
return nodeBitmapIndexed(ownerID, newBitmap, newNodes);
};
const nodeBitmapIndexed = (cache => (ownerID, bitmap, nodes) => {
const nbi = Object.create(cache || (cache = transformToMethods({
shape: SHAPE_NODEBITMAPINDEXED,
get: nodeBitmapIndexedGet,
iterate: nodesIterate,
update: nodeBitmapIndexedUpdate,
})));
nbi.ownerID = ownerID;
nbi.bitmap = bitmap;
nbi.nodes = nodes;
return nbi
})();
const nodeEntriesIterate = (node, fn, reverse) => {
const entries = node.entries;
for (let ii = 0, maxIndex = entries.length - 1; ii <= maxIndex; ii++) {
if (fn(entries[reverse ? maxIndex - ii : ii]) === false) {
return false;
}
}
};
const nodeEntryGet = (nam, shift, keyHash, key, notSetValue) => {
const entries = nam.entries;
for (let ii = 0, len = entries.length; ii < len; ii++) {
if (probeIsSame(key, entries[ii][0])) {
return entries[ii][1];
}
}
return notSetValue;
};
const nodeArrayMapUpdate = (
nam,
ownerID,
shift,
keyHash,
key,
value,
didChangeSize,
didAlter
) => {
const removed = value === NOT_SET;
const entries = nam.entries;
let idx = 0;
const len = entries.length;
for (; idx < len; idx++) {
if (probeIsSame(key, entries[idx][0])) {
break;
}
}
const exists = idx < len;
if (exists ? entries[idx][1] === value : removed) {
return nam;
}
SetRef(didAlter);
// eslint-disable-next-line @typescript-eslint/no-unused-expressions -- TODO enable eslint here
(removed || !exists) && SetRef(didChangeSize);
if (removed && entries.length === 1) {
return; // undefined
}
if (!exists && !removed && entries.length >= MAX_ARRAY_MAP_SIZE) {
return nodesCreate(ownerID, entries, key, value);
}
const isEditable = ownerID && ownerID === nam.ownerID;
const newEntries = isEditable ? entries : utilArrCopy(entries);
if (exists) {
if (removed) {
// eslint-disable-next-line @typescript-eslint/no-unused-expressions -- TODO enable eslint here
idx === len - 1 ? newEntries.pop() : (newEntries[idx] = newEntries.pop());
} else {
newEntries[idx] = [key, value];
}
} else {
newEntries.push([key, value]);
}
if (isEditable) {
nam.entries = newEntries;
return nam;
}
return nodeArrayMap(ownerID, newEntries);
};
const nodeArrayMap = (cache => (ownerID, entries) => {
const nam = Object.create(cache || (cache = transformToMethods({
shape: SHAPE_NODEARRAYMAP,
get: nodeEntryGet,
iterate: nodeEntriesIterate,
update: nodeArrayMapUpdate,
})));
nam.ownerID = ownerID;
nam.entries = entries;
return nam
})();
const nodesCreate = (ownerID, entries, key, value) => {
if (!ownerID) {
ownerID = new OwnerID();
}
let node = nodeValue(ownerID, hash(key), [key, value]);
for (let ii = 0; ii < entries.length; ii++) {
const entry = entries[ii];
node = node.update(ownerID, 0, undefined, entry[0], entry[1]);
}
return node;
};
const nodesIterate = (collection, fn, reverse) => {
const nodes = collection.nodes;
for (let ii = 0, maxIndex = nodes.length - 1; ii <= maxIndex; ii++) {
const node = nodes[reverse ? maxIndex - ii : ii];
if (node && node.iterate(fn, reverse) === false) {
return false;
}
}
};
const nodeHashArrayMap = (cache => (ownerID, count, nodes) => {
const nham = Object.create(cache || (cache = transformToMethods({
shape: SHAPE_NODEHASHARRAYMAP,
iterate: nodesIterate,
get: nodeHashArrayMapGet,
update: nodeHashArrayMapUpdate,
})));
nham.ownerID = ownerID;
nham.count = count;
nham.nodes = nodes;
return nham
})();
export {
nodeValue,
nodeValueIterate,
nodeArrayMap,
nodeArrayMapUpdate,
nodeHashArrayMap,
nodeHashArrayMapGet,
nodeHashArrayMapUpdate,
nodeHashCollision,
nodeHashCollisionUpdate,
nodeBitmapIndexed,
nodeBitmapIndexedGet,
nodeBitmapIndexedUpdate,
nodesIterate,
nodesExpand,
nodesPack,
nodesCreate,
nodeEntryGet,
nodeEntriesIterate,
nodeMergeInto,
nodeIsLeaf,
nodeUpdate,
};
```

From this file, I experimented with dynamic-method definitions and am using this pattern:

```js
const nodeHashArrayMap = (cache => (ownerID, count, nodes) => {
const nham = Object.create(cache || (cache = transformToMethods({
shape: SHAPE_NODEHASHARRAYMAP,
iterate: nodesIterate,
get: nodeHashArrayMapGet,
update: nodeHashArrayMapUpdate,
})));
nham.ownerID = ownerID;
nham.count = count;
nham.nodes = nodes;
return nham
})();
```
All tests are passing here in ~5-6 seconds, same as the main branch. Probably going to take a break for at least a few days. |
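For anyone reading along, a minimal standalone sketch of that `(cache => (...) => { ... })()` pattern, illustrative only: the IIFE captures a single `cache` object that holds the shared methods, so every instance is created with `Object.create(cache)` and no class or longer prototype chain is involved.

```js
// A factory with a lazily-built, shared prototype of methods.
const makePoint = (cache => (x, y) => {
  const p = Object.create(cache || (cache = {
    // methods are attached once, on the first call, and shared by all instances
    lengthSquared() { return this.x * this.x + this.y * this.y; },
  }));
  p.x = x;
  p.y = y;
  return p;
})();

makePoint(3, 4).lengthSquared(); // 25
```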
A likely performance improvement would be to replace arrCopy.ts' for loop with just |
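Presumably the suggestion is something along these lines (an assumption on my part, taking the `(arr, offset)` signature from how the utility is called elsewhere):

```js
// Hypothetical slice-based replacement for the manual copy loop,
// returning a shallow copy of `arr` starting at `offset`.
function arrCopy(arr, offset) {
  return arr.slice(offset || 0);
}
```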
That seems counterintuitive 🤔 |
really? please explain :), I don't understand |
My bad, I thought you wanted to slice on each iteration.
|
I opened #2098 to address that in 6.0 |
If the interface were to change this way, I wish 'to' had been previously defined with tests. If "fromJS" and other behaviours were already migrated from toXxx in advance, it would be nice.

```diff
-collection.toMap()
+collection.to(Map)
```
|
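A rough sketch of how a generic `to` could be wired (an assumption only; it leans on immutable constructors such as `Map(...)` accepting a collection argument):

```js
// Hypothetical generic converter: collection.to(Map) instead of collection.toMap().
function to(Ctor) {
  // Note: the existing toXxx() methods normalize first (e.g. toMap() likely goes
  // through a keyed seq), so a real implementation would need a similar step.
  return Ctor(this);
}

// e.g. assigned onto the shared collection prototype:
// CollectionPrototype.to = to;  // then: collection.to(Map), collection.to(List), ...
```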
I might succeed in removing circular deps here and, if so, it might be smoother to introduce such changes in a subsequent 7.x release. Because of the difficulty, the changes are non-trivial and would become the subject of discussions. In the 6.x branch, it would be nice to have a 'to(Xxx)' behaviour with tests; with that in place, a 7.x branch could keep verifying 'to(Xxx)' while removing 'toXxx()'. Politely, if you added that, it would be great.

I did not preserve the inline TypeScript annotations from the ts files. The reason is that I'm using a fresh desktop without a TypeScript setup and I wanted to just experiment first. If/when I succeed and you then basically like the changes, I can help with re-adding the types, or could PR from a different branch that strategically applies commits and preserves types.

Also, this patch should enable import file-extensions (if that suits you):

```diff
-import { Seq, KeyedSeq, IndexedSeq, SetSeq } from './Seq';
+import { Seq, KeyedSeq, IndexedSeq, SetSeq } from './Seq.js';
```

eslint.config.mjs (patch)

```diff
diff --git a/eslint.config.mjs b/eslint.config.mjs
index 5f9dcd48..48e5804b 100644
--- a/eslint.config.mjs
+++ b/eslint.config.mjs
@@ -62,6 +62,14 @@ export default tseslint.config(
files: ['src/*'],
rules: {
'no-console': 'error',
+ "import/no-unresolved": "off",
+ 'import/extensions': ['error', 'always', { js: 'always', ts: 'always' }],
+ },
+ settings: {
+ "import/no-unresolved": "off",
+ 'import/resolver': {
+ typescript: {},
+ },
},
},
```
|
It would be nice if that were supported. The technical implementation would not be challenging, but designing the interface for it would be. |
Having spent more time with this... probably the initial PR should remove circular dependencies and keep all existing tests and interfaces that convert to/from different types of data. Probably no new messages from me for a week or so. |
An acyclic import tree here passes many tests. Not finished, and still many things to do. Basically the way it's working is, collections share a basic collection definition like this:

```js
const collection = { ...collectionBasicDefinitions }

const collectionIndexed = () => cache || (
  cache = Object.assign(Object.create(collection), { ...collectionIndexedDefinitions }))
```

When "Map" or other collection types are imported from a "Universe" module, that module assigns interdependent definitions to the "basic", "keyed", "indexed" and other definitions:

```js
Object.assign(collection, { ...interdependentMethods })
Object.assign(collectionIndexed(), { ...interdependentMethodsIndexed })
```

Most of the "factory" functions rely on dynamically-resolved constructors, so list-control behaviours using the factory functions have cross-dependent relationships. To quickly get around this for now, they and their callers are updated to take additional params with whatever is needed, and sometimes this is ugly:

```js
sort: function (comparator) {
  return collectionXOpSort(this, SeqKeyed, SeqIndexed, SeqSet, comparator)
},
concat: function (...values) {
  return collectionXOpConcat(
    this,
    SeqKeyed,
    seqKeyedSeqFromValue,
    seqIndexedSeqFromValue,
    values)
},
```

Names of functions that are cross-dependent are prefixed "collectionX" so they can be easily found and maybe factored away later. Based on experiments here, factory functions for indexed collections usually need indexed-related things, keyed collections keyed-related things, and so on, so different collection types could use collection-specific list-functions to reduce cross-dependency. Probably no new messages again for another week or so. |
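As an illustration of why the "Universe" step breaks the cycle (my own reading of the description above, with hypothetical file and symbol names):

```js
// collection.js: no imports of concrete types, only shared definitions.
export const collection = { /* basic, non-cross-dependent methods */ };

// Map.js, List.js, Seq.js: each imports only collection.js (and low-level utils),
// never each other, so there is no cycle among them.

// Universe.js: the single place that knows about every concrete type.
// It patches the cross-dependent methods in after all modules have loaded.
import { collection } from './collection';
import { Map } from './Map';
import { List } from './List';

Object.assign(collection, {
  // hypothetical cross-dependent method: needs the Map constructor.
  toMap() { return Map(this.entries()); },
});

export { Map, List };
```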
Personally, I would hate this as a user. Either the user would have to list all data types of that structure, or there are dynamic imports. These methods convert data deeply; you have no way to know what data is in there. You could make it mandatory to pass in the reviver, which does the actual work. The default one doesn't even use ordered types. But as a dev of an existing project, I would not like that much either (lots of work, zero gain). PS: It's cool that you keep working on this, I am looking forward to seeing the path and what the end result is :) |
Thanks for sharing your position. It is good this PR won't change the interface in any way. To remove circular imports, it became necessary to make vast changes to the way immutable is composed. Attempts to use smaller changes did not work out, but the flow and pieces of functionality are unchanged. |
ugh

```
TypeError [ERR_INVALID_ARG_TYPE]: The "chunk" argument must be of type string or an instance of Buffer, TypedArray, or DataView. Received undefined
at _write (node:internal/streams/writable:482:13)
at Writable.write (node:internal/streams/writable:510:10)
at Suite.onStart (/home/bumble/soft/immutable-js/resources/benchmark.js:151:26)
at /home/bumble/soft/immutable-js/node_modules/benchmark/benchmark.js:1215:40
at arrayEach (/home/bumble/soft/immutable-js/node_modules/lodash/lodash.js:530:11)
at Function.forEach (/home/bumble/soft/immutable-js/node_modules/lodash/lodash.js:9410:14)
at Suite.emit (/home/bumble/soft/immutable-js/node_modules/benchmark/benchmark.js:1214:11)
at Suite.onStart (/home/bumble/soft/immutable-js/node_modules/benchmark/benchmark.js:1174:17)
at invoke (/home/bumble/soft/immutable-js/node_modules/benchmark/benchmark.js:954:25)
at Suite.runSuite [as run] (/home/bumble/soft/immutable-js/node_modules/benchmark/benchmark.js:1169:7)
```
|
maybe it is unmaintained and not working with the latest versions of node |
all tests are passing and I would like to verify anything possible before sharing |
first commit does this: import CollectionProtoAssign function; reduce Seq->Collection->Seq by allowing Seq to import this rather than inherit from Collection