PMXBOT Log file Viewer

#mongodb logs for Monday the 12th of January, 2015

[01:42:02] <anybroad> hi
[01:42:07] <anybroad> So I know how to use sort, etc.
[01:42:25] <anybroad> Well, now there are some items in a collection which should be sorted next to each other.
[01:42:45] <anybroad> I store the reference-number of item 1 on an item to indicate it should be sorted right next to item 1.
[01:42:54] <anybroad> Now how can I achieve this kind of sorting using mongo db sort?
[01:51:43] <drags> is there any way to run or simulate 'db.<collection>.getIndexKeys()' in mongo javascript?
[01:52:00] <cheeser> you mean the shell?
[01:52:25] <drags> not in the interactive shell, but running a js file: mongo my_script.js
[01:52:44] <cheeser> well, they're functionally the same at that point
[01:52:47] <drags> I'm setting up sharding and I need to enumerate all of my collection index keys so I can plan my shard keys
[01:53:31] <drags> cheeser: how can I concat my collection names (read into an array using db.getCollectionNames()) into the db.<collection>.getIndexKeys() call?
[01:54:22] <cheeser> var names = db.getCollectionNames()
[01:54:26] <cheeser> it's just javascript
[01:54:37] <drags> right, I have the names in an array
[01:54:52] <drags> how do I build the 'db.<collection>.getIndexKeys()' lines?
[01:54:59] <drags> using the members of that array
[01:55:10] <cheeser> iterate over the names, calling db.getCollection(name).getIndexKeys()
[01:55:15] <drags> ahh
[01:55:45] <drags> awesome, thank you
[02:06:38] <cheeser> np
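For reference, drags' task can be sketched as a small script. The `indexKeysByCollection` helper and the stub `db` below are illustrative stand-ins of mine; in a real `mongo my_script.js` run you would use the shell's global `db` directly, as cheeser suggests.

```javascript
// Enumerate the index keys of every collection in the current database,
// following cheeser's advice: iterate the names, calling
// db.getCollection(name).getIndexKeys().
function indexKeysByCollection(db) {
  var result = {};
  db.getCollectionNames().forEach(function (name) {
    result[name] = db.getCollection(name).getIndexKeys();
  });
  return result;
}

// Stub standing in for the mongo shell's global `db`, so the logic
// above can be exercised anywhere (collection names are illustrative).
var db = {
  getCollectionNames: function () { return ["users", "orders"]; },
  getCollection: function (name) {
    return { getIndexKeys: function () { return [{ _id: 1 }]; } };
  }
};

console.log(JSON.stringify(indexKeysByCollection(db)));
// → {"users":[{"_id":1}],"orders":[{"_id":1}]}
```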
[05:55:43] <anybroad> hi
[05:55:50] <anybroad> Do I still need this?: http://stackoverflow.com/questions/11303294/querying-after-populate-in-mongoose
[05:55:58] <anybroad> or can I now use where after populate?
[07:01:07] <Guest45212> locks question
[07:37:22] <ubungu> hi
[08:32:24] <Folkol> Hello, mongodb is logging (what seems to be?) all queries - filling up my disk. The db.getProfilingLevel() is set to "0". How can I disable this logging?
[08:33:23] <kali> Folkol: 0 is "no profiling"
[08:33:30] <kali> so it should not be this
[08:33:39] <kali> check that it is 0 in all the databases
[08:33:42] <Folkol> Yes, that is why I came here seeking help :)
[08:33:53] <Folkol> Good point, thanks.
[08:34:46] <Folkol> Yes, it is set to 0 for all databases.
[08:35:00] <Folkol> The one that I am using for my app, and "local" / "test", whatever that is.
[08:35:40] <Folkol> I have read that it always logs slow queries, but I do not know how to distinguish "normal" profiling logs from "logs due to slow queries".
[08:36:40] <kali> well, to be honest, i never use the "0" setting
[08:36:56] <kali> so i can't say it works, but it has been around for so long...
[09:15:09] <Folkol> Ok
[09:30:20] <queretaro> Hi - In http://docs.mongodb.org/manual/core/backups/ it says that mongodump is not ideal for backing up large systems, might anyone provide a little bit more of an explanation regarding this?
[09:31:16] <kali> well, mongodump means that all the data has to flow through one mongos -> mongodump connection, so it has a finite bandwidth
[09:31:34] <kali> you can get in a situation where you need more than, say, 24h to get all the data through
[09:36:36] <queretaro> kali: oh I see, makes sense thanks
[09:49:28] <Spea> Hey there. I want to create a new index in the background on one of my collections. The documentation states that with 2.4 indexes on secondaries will always be built in the foreground. Does that mean, that when the index creation on the primary finishes, all secondaries will block the collection?
[09:51:20] <kali> yes.
[09:51:27] <kali> it's a PITA.
[09:51:56] <Spea> oh damn :(
[09:51:57] <kali> 2.6 solves it, AFAIK. and there is a "procedure" to avoid that (basically, you build the index on offline secondaries)
[09:52:51] <Spea> i wonder what is less pain now, upgrading to 2.6 or the procedure you have mentioned :)
[09:53:27] <lqez> kali: you're right. http://docs.mongodb.org/manual/core/index-creation/#building-indexes-on-secondaries
[09:53:35] <lqez> It says “Changed in version 2.6: Secondary members can now build indexes in the background. Previously all index builds on secondaries were in the foreground.”
[09:53:52] <Spea> kali, lqez: thx
[09:54:37] <lqez> (Actually, I was stuck on a similar problem at 1.6 lol http://stackoverflow.com/questions/11649767/mongodb-copydatabase-runs-index-creation-on-foreground-not-background)
[09:55:30] <kali> wow 1.6, you're an oldtimer too :)
[09:55:56] <lqez> I'm still using 2.4 in production.. waiting for 2.8
[09:57:19] <lqez> when does 2.8 (non-rc) come out?
[09:57:34] <Spea> does anybody have a clue how long it would take to create an index on a collection with 17M rows?
[09:57:36] <kali> when it's ready :)
[09:58:22] <lqez> it may depend on the size of the data.
[09:58:41] <kali> Spea: it depends a lot... 30 minutes
[09:58:44] <kali> maybe
[09:58:54] <kali> but if it's 3 hours, don't go and try to sue me
[09:58:54] <lqez> and hardware spec :)
[09:59:23] <Spea> hm, even 3 hours would be ok for me :)
[09:59:48] <Spea> thx, i will give it a try
[10:00:18] <kali> Spea: seriously, just go through the offline secondaries procedure
[10:00:28] <kali> Spea: it's a bit time consuming, but relatively error proof
[10:00:43] <lqez> generally speaking, 17M is not a large collection.
[10:00:45] <Spea> yeah, i will. much less work to do for me :)
[10:00:54] <lqez> may the index be with you
[10:00:58] <Spea> <3 :D
[10:05:29] <Folkol> kali (and others who might be interested): The log-spam was due to slow queries, it seems like mongod is logging slow queries despite profilingLevel being set to 0. I added an index and the log went silent.
[10:06:21] <kali> well, that's certainly the best fix
[10:26:14] <Folkol> http://docs.mongodb.org/manual/reference/configuration-options/#operationProfiling.slowOpThresholdMs
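The option Folkol links explains the behaviour: mongod writes operations slower than `slowOpThresholdMs` to its log even when the profiler is off. A hedged sketch of the relevant config (the threshold value is illustrative; adding the missing index, as Folkol did, is the better fix):

```yaml
# mongod.conf sketch (2.6+ YAML form; values illustrative).
# Operations slower than slowOpThresholdMs go to the mongod log
# even when the profiler is off.
operationProfiling:
   mode: "off"              # profiler disabled (getProfilingLevel() == 0)
   slowOpThresholdMs: 500   # default is 100 ms; raising it quiets the log
```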
[10:40:14] <giuseppesolinas> hello
[10:40:26] <giuseppesolinas> what is the best way to install mongo on a server?
[10:41:47] <kali> giuseppesolinas: official packages from mongodb.com
[10:42:51] <giuseppesolinas> kali, does that include package management systems?
[10:43:09] <giuseppesolinas> I'm talking about the installation itself, not the sources
[10:43:31] <lqez> if you just use (not hack) mongodb itself, visit http://docs.mongodb.org/manual/installation/
[10:44:06] <lqez> (And also when you don't need the latest features on unstable versions)
[10:44:06] <kali> giuseppesolinas: mongodb provides binary packages for the most common platforms
[10:44:15] <giuseppesolinas> I'm just curious whether there is a recommended way
[10:44:32] <giuseppesolinas> or I can just install via yum and similar
[10:44:57] <lqez> I generally recommend starting with standard and simpler way.
[10:45:10] <lqez> via package manager
[10:46:04] <giuseppesolinas> thank you
[11:03:28] <giuseppesolinas> I'm getting "Could not stat /dev/xdf --- No such file or directory" when trying to install on EC2; what are the eligible partitions that I can use for mongo as in http://docs.mongodb.org/ecosystem/platforms/amazon-ec2/#deploy-mongodb-ec2 ?
[11:12:51] <lqez> giuseppesolinas: ‘Individual PIOPS EBS volumes for data (1000 IOPS), journal (250 IOPS), and log (100 IOPS)’
[11:13:24] <lqez> You have to mount the PIOPS EBS volumes manually.
[11:13:42] <lqez> Or you can use marketplace for just tasting mongodb
[11:13:46] <lqez> https://aws.amazon.com/marketplace/search/results/ref=gtw_navgno_search_box?searchTerms=mongodb
[11:14:13] <giuseppesolinas> lqez, I need to mount manually
[11:14:34] <giuseppesolinas> I get the error when sudo mkfs.ext4 /dev/xvdf
[11:15:00] <lqez> did you create an instance via '$ ec2-run-instances ami-05355a6c -t m1.large -g [SECURITY-GROUP] -k [KEY-PAIR] -b "/dev/xvdf=:200:false:io1:1000" -b "/dev/xvdg=:25:false:io1:250" -b "/dev/xvdh=:10:false:io1:100" --ebs-optimized true'
[11:15:02] <lqez> ?
[11:15:11] <giuseppesolinas> I've read that it could have different names but I don't want to overwrite useful stuff
[11:15:36] <giuseppesolinas> lqez, no, I am working on a pre-existing instance
[11:15:42] <giuseppesolinas> running fedora
[11:16:12] <lqez> that command means 'creating an m1.large instance with 3 EBS volumes',
[11:16:44] <lqez> so your pre-existing instance probably has quite different settings.
[11:17:16] <giuseppesolinas> I see
[11:17:24] <lqez> and the document is just one example, using 3 different EBSs to get better performance than using only 1 EBS.
[11:17:39] <giuseppesolinas> so I might as well use just one
[11:17:50] <giuseppesolinas> or even store data on the main fs
[11:17:53] <lqez> yop no problem. :)
[11:19:23] <lqez> giuseppesolinas: and do not change /etc/mongod.conf to match the example on that page.
[11:19:53] <lqez> it's for putting logs and data into separate volumes.
[11:20:11] <giuseppesolinas> lqez, I'm trying to figure out whether there are any EBS volumes
[11:23:32] <giuseppesolinas> ok, so I don't have any
[11:23:53] <giuseppesolinas> I'll just use the standard ami fs
[11:25:24] <giuseppesolinas> lqez, btw yes, I should change mongod.conf if I've set up those folders
[11:25:57] <lqez> yes it depends on your env.
[11:30:19] <giuseppesolinas> what about the ulimit part?
[11:31:57] <lqez> Recommended. http://docs.mongodb.org/manual/reference/ulimit/
[11:38:18] <giuseppesolinas> is it normal for mongod to take a while to boot up? A couple of minutes and it hasn't booted up yet
[11:39:10] <giuseppesolinas> ok, it was first run, took a while but now it's normal
[11:43:01] <kali> giuseppesolinas: you may want to check that you've put the data on a ext4 partition and not ext3
[11:43:27] <giuseppesolinas> kali, I think it should be ext4
[11:44:03] <giuseppesolinas> otherwise I won't be able to change it because it's the main partition and I have this ami
[11:44:44] <kali> you DONT want to run prod on ext2 or ext3
[11:47:01] <giuseppesolinas> kali, that's unlikely, and if so, it's not really my business
[11:49:59] <giuseppesolinas> of course, if I am running my app and my database on the same machine I should refer to the mongo address as "localhost", right?
[11:52:49] <lqez> sure-
[12:30:45] <adrian_lc> hi, how can I use 2.4 authentication with a 2.6 installation
[12:31:55] <adrian_lc> I'm getting this error no matter what I do Error: couldn't add user: User and role management commands require auth data to have schema version 3 but found 1 at src/mongo/shell/db.js:1004
[12:32:40] <adrian_lc> I don't wanna update the auth schema cause ansible doesn't seem to support the new version yet
[12:51:11] <tommy_the_dragon> I want to do an update using the java driver. I have the JSON for the update I wish to make in a string. Is there an easy way to turn that into a DBObject?
[12:54:04] <tommy_the_dragon> I'm probably struggling with the basics on how to use the driver. Can someone link me a decent tutorial?
[12:54:51] <tommy_the_dragon> something concise
[12:56:22] <tommy_the_dragon> Of course I can work with what I have, but I'd rather not re-invent the wheel
[12:57:34] <kali> tommy_the_dragon: the driver is not meant to be used with json actually
[12:57:57] <StephenLynx> just came in, what are you guys talking about?
[12:58:19] <kali> StephenLynx: http://irclogger.com/.mongodb
[12:58:40] <StephenLynx> neat :V
[13:05:54] <tommy_the_dragon> kali: well, how am I supposed to deal with the result of a query? Maybe that's a better question.
[13:08:26] <tommy_the_dragon> is using toMap and putAll a good way to do an update?
[13:10:57] <kali> work with the DBObject, yes
[13:11:17] <kali> either pull one, modify it and save it, or make a new DbObject with your modifiers
[13:12:52] <tommy_the_dragon> That's what I mean, what's the best way to modify/pull values from them, when subdocuments are involved too
[13:13:53] <kali> toMap
[13:14:20] <tommy_the_dragon> then cast subdocuments to BSONObject?
[13:15:14] <kali> also, note that if you build modifier objects, you can make them BasicDBObjects, and they implement Map<> already
[13:16:16] <tommy_the_dragon> or Map?
[13:25:27] <cheeser> i wouldn't use BSONObject directly
[15:40:00] <jiffe> Mon Jan 12 08:25:24.159 Invalid access at address: 0x706eeb27dcc8 from thread: repl writer worker 1
[15:40:05] <jiffe> is that a known issue in 2.4.12
[15:41:49] <StephenLynx> I just saw some benchmarks where postgresql nosql tools outperformed mongodb by over 2 times the speed. anyone here have something to say about that, or know if anything changed in the last year? or does postgres really outperform mongo?
[16:10:58] <theRoUS> i'm having trouble formulating a query using regexes. i want to find all records where {'short_description':/CRITICAL/} AND {'$not':{'short_description':/ACK/}}
[16:11:36] <kexmex> { $and : [ ..... ] }
[16:12:23] <theRoUS> kexmex: isn't $and implicit? i've tried "{'short_description':/CRITICAL/,'$not':{'short_description':/ACK/}}" but get no records
[16:12:32] <kexmex> no
[16:12:43] <kexmex> i dunno
[16:13:18] <kexmex> i always use $and, maybe you are right :)
[16:13:31] <kexmex> have you tried those queries individually?
[16:15:02] <theRoUS> mmm
[16:15:38] <theRoUS> yah, the '$not':{'short_description':/ACK/} returns null, so that must not be the right syntax
[16:16:18] <kexmex> maybe $not without quotes?
[16:16:49] <theRoUS> no difference
[16:16:58] <Derick> 'short_description' : { '$not' : /ACK/ } ?
[16:17:09] <kexmex> oh
[16:17:19] <Derick> you probably would like to stay away from regular expressions like that though, as they can't use an indexed lookup
[16:18:02] <kexmex> the first one will make the subset smaller i'd guess
[16:18:06] <kexmex> first part of it
[16:18:27] <kexmex> or the "like" cannot use index?
[16:18:55] <theRoUS> Derick: that did it.
[16:18:56] <kexmex> what if it's /^BLA/?
[16:19:08] <Derick> kexmex: then an index can be used
[16:19:20] <theRoUS> however, i don't have a lot of choice on the regex stuff
[16:19:23] <Derick> (for that criterium)
[16:19:28] <kexmex> i see
[16:19:31] <Derick> theRoUS: you can pre-filter - when you're inserting
[16:19:52] <theRoUS> this is query only, no insertions
[16:20:06] <kexmex> you can add another column
[16:20:07] <Derick> something must have inserted the data?
[16:20:15] <kexmex> err another field
[16:20:20] <Derick> kexmex: we call them "fields" instead of columns..
[16:20:22] <Derick> right :D
[16:20:29] <kexmex> tables!!
[16:20:31] <kexmex> :)
[16:20:32] <theRoUS> Derick: so how do i combine the two regexes?
[16:20:36] <cheeser> columns? what are columns, precious?
[16:20:50] <Derick> theRoUS: how would you do it in perl?
[16:21:09] <kexmex> $and : [ .. ] doesn't work?
[16:21:15] <theRoUS> the 'short_description' field is very, very, very multi-valued
[16:21:50] <theRoUS> Derick: (($short_description =~ /CRITICAL/) && ($short_description !~ /ACK/))
[16:22:21] <theRoUS> it's two separate selectors
[16:22:57] <Derick> and I think so must you in mongodb then
[16:23:58] <StephenLynx> I remember using $and with regex
[16:24:06] <StephenLynx> hold on, I will look for it
[16:25:07] <theRoUS> comprends.. but how? 'short_description' can't be the selector key, since regexes are dyadic. can't say 'short_description':{'$and':[{'$not':/ACK/},/CRITICAL/]} can i?
[16:26:03] <StephenLynx> found it. hold on
[16:26:20] <theRoUS> error: { "$err" : "invalid operator: $and", "code" : 10068 }
[16:27:00] <StephenLynx> https://gitlab.com/mrseth/bck_leda/blob/master/pages/search_user.js line 32
[16:28:09] <theRoUS> oh, so you *do* refer to the field multiple times.
[16:28:22] <Derick> yes, you have to in this case, with $and
[16:29:02] <kexmex> anyone heard of mongodb running on docker?
[16:29:12] <Derick> db.col.find( { $and : [ { 'short_description': /CRITICAL/ }, { 'short_description': { $not: /ACK/ } } ] } );
[16:29:39] <StephenLynx> no need for /XXX/
[16:29:42] <StephenLynx> just the string
[16:29:49] <kexmex> why?
[16:30:01] <StephenLynx> wait, wait
[16:30:13] <StephenLynx> can you use // instead of the regex operator?
[16:30:21] <Derick> on the shell, yes
[16:30:23] <StephenLynx> oh
[16:32:10] <theRoUS> Derick, StephenLynx: got it. that works. probably totally ignores the index on that field, but once cached it's pretty fast. about a second to pick out 26_000 records from 100_000
[16:33:08] <StephenLynx> yeah, using a regular list for an AND gate is not very intuitive.
[16:33:37] <StephenLynx> and yes, from what I remember from the docs, regex does not use indexes.
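Derick's `$and` query above can be exercised locally. The `filter` object is the shape passed to `find()`; the `matches` function is a stand-in written here to illustrate the intended semantics (match CRITICAL, exclude ACK), not a driver API:

```javascript
// theRoUS's query, as Derick wrote it: both clauses name the field,
// combined explicitly with $and.
var filter = {
  $and: [
    { short_description: /CRITICAL/ },
    { short_description: { $not: /ACK/ } }
  ]
};

// Local illustration of what the server-side match does with that filter.
function matches(doc) {
  return /CRITICAL/.test(doc.short_description) &&
         !/ACK/.test(doc.short_description);
}

console.log(matches({ short_description: "CRITICAL: disk full" }));      // true
console.log(matches({ short_description: "ACK CRITICAL: disk full" }));  // false
```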
[16:38:34] <kexmex> so like
[16:38:41] <kexmex> Docker + MongoDB
[16:38:42] <kexmex> anyone?
[16:45:07] <Derick> StephenLynx: regex anchored to the start of the string uses an index, anything with $not does not
[16:45:15] <StephenLynx> hm
[16:46:12] <StephenLynx> what about the regex operator?
[16:46:29] <StephenLynx> can you anchor it to the start of the string?
[16:47:21] <StephenLynx> outside the CLI
[16:48:07] <Derick> StephenLynx: sure, by having the regxep have ^ at the start
[16:48:15] <StephenLynx> hm
[16:48:30] <StephenLynx> will keep that in mind
[16:48:32] <Derick> ie, /^foo/
[16:48:51] <StephenLynx> can't you use // only in the CLI?
[16:49:16] <Derick> correct
[16:49:23] <Derick> other language have an operator
[16:49:35] <StephenLynx> if I use $regex : {field: '^foo'} will it understand it is anchored to the start of the string?
[16:50:23] <Derick> yes
[16:50:59] <StephenLynx> hmm
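For the record, the driver-side form being discussed: the `$regex` operator nests under the field name (field and pattern here are illustrative), and anchoring the pattern with `^` is what lets an index on that field be used, per Derick.

```javascript
// Outside the shell, where /^foo/ literals aren't available, an anchored
// regex query is written with the $regex operator under the field:
var query = { name: { $regex: "^foo" } };
// Shell equivalent: { name: /^foo/ }

console.log(JSON.stringify(query)); // {"name":{"$regex":"^foo"}}
```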
[17:42:26] <anybroad_> Is this also the right channel about mongoose (a mongodb nodejs library)?
[17:43:04] <kali> you can try
[17:46:59] <StephenLynx> but don't get frustrated if no one is able to help you.
[17:47:26] <StephenLynx> because stuff like mongoose changes too much about how you deal with the database itself.
[17:49:10] <anybroad_> oh
[17:49:16] <anybroad_> how would one deal with it without mongoose
[17:49:41] <anybroad_> For example, I want to assign categories from categories selection to books in books collection.
[17:49:42] <anybroad_> n:1 relation
[17:49:54] <anybroad_> I guess this is not possible with mongodb?
[17:49:59] <anybroad_> How to handle it instead then?
[17:50:22] <StephenLynx> I just do it.
[17:50:31] <StephenLynx> can't see why I wouldn't be able to do it
[17:50:37] <cheeser> is the category simply a name?
[17:50:58] <StephenLynx> all data is just a json nugget, you just jam it on a collection
[17:51:14] <anybroad_> that's the point
[17:51:27] <StephenLynx> then how wouldn't I be able to jam it into a collection?
[17:52:08] <StephenLynx> oh, are you talking about relations?
[17:52:12] <anybroad_> yes
[17:52:14] <StephenLynx> mongoose doesn't actually make it related
[17:52:18] <anybroad_> the category should be the same
[17:52:19] <StephenLynx> it just pretends it is
[17:52:35] <StephenLynx> and enforces this relation
[17:52:39] <anybroad_> in other words, when I change something in category, it should be changed for the category the books got, too.
[17:52:50] <anybroad_> so those categories are global, they are assigned
[17:52:56] <anybroad_> denormalized (right word for this?)
[17:52:58] <StephenLynx> yeah, you can just do that
[17:53:08] <StephenLynx> you would just have to do it manually
[17:53:19] <StephenLynx> or use a relational database since the problem requires a relational database
[17:53:24] <StephenLynx> I would go with option B
[17:53:37] <StephenLynx> instead of a travesty of a relational db like mongoose
[17:54:10] <anybroad_> Pardon my ignorance, what would be the use cases for mongodb? I heard about it and like its flexibility - how can I make it useful for me?
[17:54:18] <StephenLynx> performance, for one.
[17:54:47] <StephenLynx> data that is not very related
[17:55:01] <StephenLynx> data that is related 1:n
[17:55:11] <StephenLynx> so you can just make it a field of something
[17:55:38] <StephenLynx> when you have too many n:n then you need a relational DB
[17:56:24] <StephenLynx> from what I heard it is easy to just add more servers to a cluster with mongo and you don't even have to stop the servers
[17:56:30] <anybroad_> ah, nice
[17:56:43] <anybroad_> but I will need this when I got many, many visitors and so a highload on the database
[17:57:01] <StephenLynx> ok
[17:57:24] <StephenLynx> you maybe should try and design your db in a non-relational way
[17:57:46] <StephenLynx> and use a cache for authentication
[17:58:35] <anybroad_> So I assign categories to books.
[17:58:37] <StephenLynx> https://gitlab.com/mrseth/bck_leda/blob/master/doc/model.txt
[17:58:42] <anybroad_> Each category got translation in different languages
[17:58:53] <anybroad_> so when I change something in the category I would have to update it for each book, right?
[17:59:05] <anybroad_> So I need some hook-update system which does this updating each time the category is modified
[17:59:11] <StephenLynx> you can just query for all books that belong to said category
[17:59:16] <StephenLynx> and perform an update on them
[17:59:30] <anybroad_> hm, interesting
[17:59:35] <anybroad_> does this take more time then?
[17:59:39] <StephenLynx> I would just have one field
[17:59:39] <anybroad_> *processing time
[17:59:43] <StephenLynx> in the books
[17:59:50] <StephenLynx> that would be the category name
[18:00:05] <StephenLynx> so you would only need to do that when changing the category name
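The flow StephenLynx describes, sketched locally. In the 2.x shell a category rename would be a single multi-update, e.g. `db.books.update({ category: oldName }, { $set: { category: newName } }, { multi: true })`; below, a plain array stands in for the collection so the logic is runnable anywhere (collection and field names are illustrative):

```javascript
// Books carry the category name directly (denormalized), so renaming a
// category means updating every book that carries the old name.
function renameCategory(books, oldName, newName) {
  books.forEach(function (book) {
    if (book.category === oldName) book.category = newName;
  });
  return books;
}

var books = [
  { title: "Dune", category: "Sci-Fi" },
  { title: "Emma", category: "Classics" }
];

renameCategory(books, "Sci-Fi", "Science Fiction");
console.log(books[0].category); // Science Fiction
console.log(books[1].category); // Classics (untouched)
```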
[18:00:53] <anybroad_> so no id or object id thing, but instead just the string of category
[18:01:07] <anybroad_> so I do two queries, one for the category, one for the book?
[18:01:09] <StephenLynx> as long as it is unique.
[18:01:17] <anybroad_> What about mongoose? I use it to enforce schemas + validation.
[18:01:20] <StephenLynx> a name is much more readable
[18:01:24] <anybroad_> yes
[18:01:37] <StephenLynx> as I said, mongoose only pretends it is relational
[18:01:42] <anybroad_> ah
[18:01:45] <anybroad_> so I can still use it?
[18:01:47] <StephenLynx> if you really need a relational database, use a relational database
[18:01:51] <StephenLynx> of course
[18:01:55] <StephenLynx> I wouldn't
[18:02:06] <StephenLynx> but you could if you wish
[18:02:08] <anybroad_> So rather use raw mongo?
[18:02:17] <StephenLynx> yes, that 's what I do.
[18:02:17] <anybroad_> I am new to this so I am glad for some directions.
[18:02:22] <StephenLynx> see my model.txt
[18:02:33] <StephenLynx> that's how I design non-relational databases
[18:02:44] <anybroad_> ah
[18:02:54] <anybroad_> and this model.txt is parsed by mongodb and used as schema?
[18:02:57] <StephenLynx> no
[18:03:01] <StephenLynx> it is just documentation
[18:03:08] <anybroad_> ok
[18:03:14] <StephenLynx> I just gave you that so you could understand how I do it.
[18:03:20] <anybroad_> nice, thanks
[18:03:31] <StephenLynx> and I work in a way that I don't have to relate the data too.
[18:03:32] <anybroad_> so schema enforcement and validation is done by the programmer?
[18:03:44] <StephenLynx> yes.
[18:03:52] <StephenLynx> the database itself doesn't do it.
[18:04:02] <StephenLynx> it's either the programmer or a layer on top of the db like mongoose.
[18:04:38] <anybroad_> so you are using the login as the key, and when you encounter a field with a login you do a 2nd lookup to find the associated user?
[18:05:00] <StephenLynx> yes, but I try to work in a way that I don't have to do that.
[18:05:23] <StephenLynx> so in posts you only have the poster login
[18:05:33] <StephenLynx> that is already in a field of the post
[18:05:37] <StephenLynx> for example
[18:05:50] <StephenLynx> if you wish to see the profile of the user, then I make this second lookup
[18:06:13] <StephenLynx> and yes, that is a consequence of using noSQL
[18:06:37] <StephenLynx> you either shape your user interface because of it or you make tons of queries
[18:06:43] <StephenLynx> because you don't have joins.
[18:07:01] <StephenLynx> at least with mongo, I haven't used other noSQL dbs.
[18:07:47] <StephenLynx> that's why noSQL is not a universal solution.
[18:08:00] <anybroad_> thanks for the explanation
[18:08:01] <cheeser> "nosql" is a meaningless term, though.
[18:08:30] <StephenLynx> non-relational db would be better? probably. but it is way longer and I'm lazy
[18:09:00] <cheeser> you can model relational data in mongo just fine.
[18:09:08] <anybroad_> cheeser: how are you doing it?
[18:09:13] <StephenLynx> really? how.
[18:09:25] <StephenLynx> can you make foreign keys?
[18:09:29] <cheeser> i use references or embedding. depends on how the data is used and changed.
[18:09:31] <cheeser> DBRef
[18:09:45] <cheeser> or just store the ID like you would in an RDBMS
[18:09:52] <StephenLynx> dbref?
[18:10:02] <cheeser> there's just no validation on the FKs
[18:10:11] <cheeser> http://docs.mongodb.org/manual/reference/database-references/
[18:10:53] <StephenLynx> http://stackoverflow.com/questions/9412341/mongodb-is-dbref-necessary "Dbref in my opinion should be avoided when work with mongodb, at least if you work with big systems that require scalability.
[18:10:53] <StephenLynx> As i know all drivers make additional request to load DBRef, so it's not 'join' within database, it is very expensive. "
[18:11:00] <StephenLynx> so yeah, nah.
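For reference, the two reference styles cheeser names, side by side (collection and field names are illustrative). Either way, the follow-up query is issued by the application or a driver helper, not the server, which is the point both sides are arguing over:

```javascript
// Manual reference: store the related _id alone, as you would a foreign
// key in an RDBMS; your code decides when (and whether) to fetch it.
var manualRef = { title: "Dune", author_id: 5096 };

// DBRef: a conventional subdocument shape ($ref names the collection,
// $id the referenced _id) that many drivers know how to resolve lazily.
var dbRef = { title: "Dune", author: { $ref: "authors", $id: 5096 } };

console.log(dbRef.author.$ref); // authors
```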
[18:11:46] <anybroad_> so one denormalizes the data in nosql in a way that multiple queries aren't needed?
[18:12:06] <StephenLynx> yes
[18:12:16] <anybroad_> that's the opposite of what rdbms do usually.
[18:12:19] <StephenLynx> yes
[18:12:25] <anybroad_> interesting, a difference between nosql and rdbms
[18:12:29] <StephenLynx> indeed.
[18:12:42] <StephenLynx> learn both because both have their very necessary use cases.
[18:12:58] <StephenLynx> relational dbs are as needed as non relational.
[18:13:16] <anybroad_> so updating or inserting in nosql database requires traversing and updating the related records, right?
[18:13:20] <StephenLynx> yes.
[18:13:53] <StephenLynx> so you deal with the lack of relations by keeping your relations to a minimum
[18:14:05] <StephenLynx> because if you start relating, you will have performance issues.
[18:14:23] <StephenLynx> and will just be using a crippled version of what relational databases already do.
[18:14:29] <cheeser> that SO post is ... interesting.
[18:15:09] <StephenLynx> it doesn't surprise me. mongodb is not designed for relations, period. even if it implements the semantics, it does not work around that.
[18:15:26] <cheeser> and by interesting i mean misguided.
[18:15:40] <anybroad_> SO post = Stack Overflow post?
[18:15:52] <StephenLynx> ok, what would be your counter-point to that post?
[18:15:53] <cheeser> anybroad_: yes
[18:16:24] <StephenLynx> because neither your post nor his gives any data, but his makes more sense because of how mongo is designed.
[18:16:36] <cheeser> there's nothing inherently wrong with DBRefs. the choice to use embedded documents versus references is a bit more nuanced than "never do this" or "always do that"
[18:16:56] <StephenLynx> but it still just makes additional queries?
[18:17:27] <cheeser> what is "it?"
[18:18:12] <StephenLynx> "To resolve DBRefs, your application must perform additional queries to return the referenced documents. Many drivers have helper methods that form the query for the DBRef automatically. The drivers [1] do not automatically resolve DBRefs into documents."
[18:18:15] <StephenLynx> official docs
[18:18:32] <StephenLynx> dbrefs just do what you would otherwise have to do manually
[18:18:45] <StephenLynx> so his point got proven.
[18:18:54] <cheeser> not particularly so, no.
[18:19:14] <cheeser> it can be perfectly valid to model your data like that in mongo provided you don't need those referenced docs all the time.
[18:19:26] <cheeser> you can think of them as lazy fetches in something like JPA
[18:19:40] <StephenLynx> the point is not that. it's the loss of performance because you are not using a tool designed for that job.
[18:20:24] <StephenLynx> if you are not referencing the docs all the time, you might as well just use as little of the relations as I suggested.
[18:20:41] <cheeser> the point is he's making generalized, sweeping pronouncements absent any context in which to make an informed decision.
[18:21:09] <cheeser> StephenLynx: that's different than saying "never use DBRef"
[18:21:17] <StephenLynx> ok, forget the SO post.
[18:21:18] <cheeser> i'm saying be smart about it based on your app
[18:21:32] <StephenLynx> mongo still is not designed around relations.
[18:21:43] <StephenLynx> and anything you use is just a travesty in the literal sense of the word.
[18:21:47] <cheeser> is mongo a bad fit? maybe. but maybe not. and no single sound bite from an internet forum is going to adequately answer that question.
[18:22:03] <cheeser> see that's just wrong.
[18:22:16] <StephenLynx> it still just performs additional queries. THE OFFICIAL DOCS said it.
[18:22:22] <cheeser> it *can* be depending on access patterns and usage.
[18:22:37] <cheeser> additional queries are a problem in RDBMSes, too.
[18:22:56] <cheeser> it depends on your app and how it uses the data.
[18:22:59] <StephenLynx> yeah, w/e
[18:23:06] <StephenLynx> I think I got my point across already.
[18:23:41] <anybroad_> alright, so when I add a new book, some code would update the category property in the book's JSON/BSON, too.
[18:23:53] <anybroad_> and when I update a category, the code must go through each book and update the fields I changed in category
[18:24:07] <anybroad_> but I spare a lookup when fetching books (no pun intended), as the category is already there
[18:25:16] <cheeser> that's one way, yes.
[18:25:31] <StephenLynx> or you just have one field in the book for the category and you don't update the book.
[18:25:35] <StephenLynx> just the category
[18:25:56] <StephenLynx> and when fetching the books, just fetch its category
[18:26:00] <drager> How do I make mongodb accept external ips? Because I have one website running that db and a site and I have one android application which needs that data from that server. But if I accept external ips will that result in a security flaw?
[18:26:07] <StephenLynx> you have to bind.
[18:26:22] <cheeser> drager: enable authentication in mongodb
[18:26:24] <StephenLynx> there is a command, but if you look for mongo ip binding it should appear easily
[18:26:48] <StephenLynx> the problem there is that your front-end is connecting to your database
[18:26:58] <StephenLynx> that is bad design
[18:27:17] <anybroad_> StephenLynx: so a second lookup would be necessary but wouldn't be very expensive as mongodb queries are very fast?
[18:27:21] <drager> cheeser: I'm reading here; http://docs.mongodb.org/manual/administration/security/#SecurityandAuthentication-Ports
[18:27:44] <StephenLynx> a good architecture is to have an application running on the server that will accept http requests, query the database and output a response.
[18:28:09] <StephenLynx> anybroad_ a second lookup? if you wish to see the details of the category, yes.
[18:28:25] <StephenLynx> otherwise, no. you just display the category of the book.
[18:28:28] <anybroad_> StephenLynx: ah, I use the category name in category field, so I know it already
[18:28:39] <anybroad_> StephenLynx: what about a slug (URL-ized/shortened name)?
[18:28:46] <anybroad_> or do I simply put both into that field?
[18:29:02] <StephenLynx> I would use what I want to show in the GUI already
[18:29:22] <anybroad_> StephenLynx: as I do the same (the problem there is that your front-end is connecting to your database), how should this function then?
[18:29:28] <StephenLynx> so I don't have to have two fields and update the books if I change the display name
[18:29:48] <StephenLynx> you are not connecting the FE to the database, you are just using a unique field as any other
[18:29:59] <anybroad_> StephenLynx: ok, so only the category name, and 2nd query is performed to look up the slug of category.
[18:30:01] <StephenLynx> you just happen to use the field that is the display name of the category
[18:30:11] <StephenLynx> slug? details of the category?
[18:30:28] <anybroad_> slug means in this case a short name of category to be used in urls for example
[18:30:40] <anybroad_> long category thing -> short-category
[18:31:29] <StephenLynx> I would either use the long name or use the body of the request for the parameters instead of the url
[18:32:14] <anybroad_> StephenLynx: pardon my ignorance, 'body of the request'?
[18:32:22] <StephenLynx> yeah
[18:32:26] <StephenLynx> like in a post request
[18:32:42] <StephenLynx> it doesn't use the url for the request, it sends a string in the body of it
[18:33:51] <StephenLynx> sec, I will get my doc
[18:34:05] <StephenLynx> https://gitlab.com/mrseth/lynxchan/blob/master/doc/Back-end%20interface.txt
[18:34:09] <StephenLynx> this is what I do
[18:34:18] <StephenLynx> I send a serialized JSON in the body of the request
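Sending "a serialized JSON in the body of the request" looks roughly like this from browser-side JavaScript. The `action` field and endpoint are assumptions for illustration, not taken from the linked doc.

```javascript
// Build a JSON request body (field names are made up for this sketch).
function buildBody(action, data) {
  return JSON.stringify(Object.assign({ action: action }, data));
}

// Browser-side send; XMLHttpRequest exists in browsers, not in node.
function postJson(endpoint, body, onDone) {
  const xhr = new XMLHttpRequest();
  xhr.open('POST', endpoint);
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.onload = function () { onDone(xhr.responseText); };
  xhr.send(body);
}
```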
[18:35:06] <anybroad_> StephenLynx: in this case it is not an app but a website, so I have to use slugs or normal urls for the user/seo
[18:35:23] <StephenLynx> not true.
[18:35:35] <StephenLynx> you can just make http requests using javascript
[18:36:33] <StephenLynx> https://gitlab.com/mrseth/lynxchan/blob/master/src/fe/js/common.js line 704
[18:36:56] <StephenLynx> but
[18:37:07] <StephenLynx> if you really, really want to have something url friendly
[18:37:15] <StephenLynx> and something readable
[18:37:21] <StephenLynx> I would save both on the books
[18:37:21] <drager> cheeser: Shouldn't it work just binding mongo to 0.0.0.0 for it to accept external ips?
[18:37:54] <StephenLynx> drager if you accept external ips make sure you enable authentication. but you really should just put a webserver running to perform the queries
[18:37:56] <cheeser> it should, yes.
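A mongod.conf fragment for what drager and cheeser settle on; this is an assumed config, not drager's actual one. Binding to all interfaces is only sane with authorization enabled, as StephenLynx warns below.

```yaml
# Sketch of a mongod.conf: accept external ips, but require auth.
net:
  port: 27017
  bindIp: 0.0.0.0        # listen on all interfaces
security:
  authorization: enabled # never expose an open mongod to the internet
```

Remember that host firewall rules (iptables, as drager suspects) also have to allow the port through.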
[18:39:28] <drager> Not working for me, maybe iptables are the problem now
[18:39:36] <drager> Yeah I will add auth as I get it working
[18:39:50] <drager> StephenLynx: How can I make a webserver do it in an android app?
[18:40:08] <StephenLynx> you will have to perform http requests on the android app to the server.
[18:40:15] <cheeser> make http requests to the server, server code processes and responds.
[18:40:29] <StephenLynx> I use node for my servers.
[18:40:40] <drager> Hm, because it's a meteor application that uses phonegap
[18:40:51] <drager> (node)
[18:41:06] <StephenLynx> you can do http requests with phonegap.
[18:41:15] <drager> Yeah, but meteor build the app for me
[18:41:18] <drager> and fixing all that
[18:41:19] <drager> for me
[18:41:24] <StephenLynx> I know because my company has some legacy apps that use that and they do http requests
[18:41:33] <StephenLynx> no matter what you use, you can do http requests
[18:42:06] <drager> Yeah, but I can't decide where it connects to mongo, just the url
[18:42:09] <anybroad_> StephenLynx: right, web apps are based on javascript, they wouldn't work without it, (plain html), right?
[18:42:10] <drager> thats the problem really
[18:42:59] <StephenLynx> anybroad_ you are accounting for no javascript?
[18:43:02] <StephenLynx> at all?
[18:43:06] <anybroad_> no
[18:43:08] <anybroad_> eh, yes
[18:43:18] <anybroad_> Normal website, with links and such.
[18:43:28] <anybroad_> So google can crawl and index them.
[18:43:39] <StephenLynx> afaik, you can still have webcrawlers
[18:43:51] <StephenLynx> you just have to do something
[18:43:57] <StephenLynx> that I can't remember
[18:44:08] <StephenLynx> so they will have something to scan
[18:44:08] <anybroad_> aha
[18:44:15] <StephenLynx> but your page is javascript based
[18:44:16] <anybroad_> static fallback links?
[18:44:20] <anybroad_> those pages still need static?
[18:44:21] <StephenLynx> don't know
[18:44:28] <StephenLynx> never did, I don't care about crawlers
[18:44:40] <anybroad_> what is about ranking and finding content?
[18:44:48] <anybroad_> but we are talking about web _apps_, right?
[18:44:57] <StephenLynx> web apps are just javascript.
[18:45:04] <cheeser> some of them anyway
[18:45:18] <StephenLynx> well, if you don't use dumb shit like flash
[18:45:22] <StephenLynx> or silverlight
[18:45:37] <cheeser> webapps have more to them than the html on the front end
[18:45:43] <anybroad_> ah
[18:46:07] <StephenLynx> I only use MVC architectures relying on front-end javascript
[18:46:11] <anybroad_> I use javascript for normal websites for polyfilling missing browser features (e.g. css, etc)
[18:46:16] <StephenLynx> if the user doesn't have it enabled, tough luck
[18:46:33] <StephenLynx> I never generate HTML on the back-end
[18:46:57] <StephenLynx> but that is just me and I don't do web front-end for a living.
[18:47:19] <StephenLynx> but I know you can have js based pages that crawlers identify
[18:47:36] <StephenLynx> don't drop js based web front end because of that.
[18:47:55] <StephenLynx> you may have reasons to do it, like wanting every user to be able to use the page even without js.
[18:48:01] <StephenLynx> but crawlers are not a good reason.
[18:49:21] <anybroad_> hm, ok
[18:49:40] <anybroad_> I use javascript for interactivity and filling missing browser functionality (polyfill) and it is very handy for this.
[18:49:53] <anybroad_> But it can be disabled and the html/css and link stuff would still be functional.
[18:50:02] <anybroad_> Something like express for node does for example.
[18:50:19] <StephenLynx> express is just a framework
[18:50:27] <StephenLynx> node is doing the work
[18:50:30] <StephenLynx> when you use express
[18:50:44] <StephenLynx> oh
[18:50:53] <StephenLynx> I misread, I read "express or node"
[18:51:14] <StephenLynx> and node can still do whatever you do with express. don't think express is a necessity
[18:51:28] <anybroad_> you mean using connect?
[18:51:34] <anybroad_> yes, I use only parts of express for routing and such
[18:51:40] <anybroad_> most functionality comes from other modules
[18:51:56] <anybroad_> but when the future is javascript thick clients for the browser, using the dom (html) only for some rendering stuff, ok
[18:52:03] <anybroad_> I need to know this to be able to re-focus.
[18:52:17] <StephenLynx> I don't use express.
[18:52:20] <StephenLynx> i despise it.
[18:52:57] <StephenLynx> it is the jquery of back-ends.
[18:53:21] <StephenLynx> bloated, useless and do everything you can do with vanilla.
[18:53:39] <StephenLynx> and vanilla can do anything it does already*
[18:55:12] <anybroad_> StephenLynx: ok, that's interesting. So app.use(function(req,res,next){ can be replaced by some other middle-ware module?
[18:55:24] <StephenLynx> yeah
[18:55:28] <StephenLynx> or by nothing
[18:55:39] <anybroad_> or this? app.set('views', path.join(__dirname, 'views'));
[18:55:42] <anybroad_> interesting
[18:55:48] <anybroad_> I would like to get rid of all unneeded stuff.
[18:55:53] <StephenLynx> in fact, "app" does not exist outside express.
[18:55:56] <anybroad_> yes, express was split into separate stuff anyway
[18:56:00] <anybroad_> aha
[18:56:40] <StephenLynx> node just executes the function you give it when you start listening to a connection
[18:56:50] <StephenLynx> and you do anything you want from there
[18:58:08] <StephenLynx> express just hides it under a bunch of crap that node has to dig through to reach your code
[19:01:03] <anybroad_> StephenLynx: So I use express-validator, express-session, express-error-handler and express-enrouten
[19:01:14] <anybroad_> But those are standalone modules, they had been taken out from express to do one thing.
[19:01:25] <anybroad_> I can still use them as they only bear the name 'express'?
[19:05:27] <drager> StephenLynx: I have a webserver running with websockets pushing the data but I'm currently getting; (android:http://meteor.local/:0) XMLHttpRequest cannot load http://domain.se/sockjs/info?cb=firnycyccl. Origin http://meteor.local is not allowed by Access-Control-Allow-Origin.
[19:14:55] <StephenLynx> drager yeah, CORS.
[19:15:09] <StephenLynx> in the webserver you must add the CORS permissions to the response header
[19:15:11] <StephenLynx> just a second
[19:15:24] <StephenLynx> that is a fucking pain in the ass indeed when you are dealing with browsers
[19:16:12] <StephenLynx> https://gitlab.com/mrseth/lynxchan/blob/master/src/be/operations.js line 194
[19:16:28] <StephenLynx> is what I do when I want headers that will just work on browsers.
[19:16:38] <StephenLynx> you can add complex rules to it though if you want something more secure
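The permissive "just work on browsers" approach amounts to adding CORS headers to every response. A minimal sketch (the function name is an assumption); tighten the origin handling for anything security-sensitive.

```javascript
// Build permissive CORS response headers; pass the request's Origin header
// to echo it back, or fall back to the wildcard.
function corsHeaders(origin) {
  return {
    'Access-Control-Allow-Origin': origin || '*',
    'Access-Control-Allow-Methods': 'GET, POST, OPTIONS',
    'Access-Control-Allow-Headers': 'Content-Type'
  };
}

// In a node http handler you would apply them before responding:
// res.writeHead(200, corsHeaders(req.headers.origin));
```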
[19:16:50] <drager> Yeah, I'm aware of CORS
[19:16:57] <drager> this is a meteor problem so I will talk to them
[19:16:59] <StephenLynx> anybroad_ I don't know, I don't care, I wouldn't touch it with a mile long pole.
[19:17:09] <drager> because my android app should set the url to mydomain.com
[19:17:12] <drager> and it does not
[19:17:15] <drager> so i will fix it
[19:17:20] <drager> thanks though for the help StephenLynx
[19:17:23] <StephenLynx> np
[19:17:28] <anybroad_> StephenLynx: what are you using for routing + form validation?
[19:17:32] <anybroad_> *would you use
[19:17:37] <StephenLynx> in fact
[19:17:41] <StephenLynx> I do routing
[19:17:44] <StephenLynx> and I just use node.
[19:18:09] <StephenLynx> the url gives you everything you need.
[19:18:12] <StephenLynx> subdomain, path
[19:18:54] <anybroad_> ah
[19:19:03] <anybroad_> what npm modules are you using if I may ask?
[19:19:07] <anybroad_> *for routing
[19:19:17] <StephenLynx> none
[19:19:47] <StephenLynx> https://gitlab.com/mrseth/lynxchan/blob/master/src/boot.js line 314
[19:20:45] <StephenLynx> or line 280
[19:20:47] <StephenLynx> 270*
[19:22:27] <StephenLynx> or line 217
[19:23:18] <anybroad_> thanks for the snippet
[19:23:51] <StephenLynx> I just fiddle with the URL
[19:24:11] <StephenLynx> vanilla has already enough functions to help you with that
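Express-free routing of the kind StephenLynx points at boils down to parsing the URL and dispatching on a path segment. This sketch uses only vanilla node; the route names are assumptions, not taken from lynxchan's boot.js.

```javascript
// Dispatch on the first path segment of a request url.
function route(requestUrl) {
  // The base is only needed because requestUrl is a bare path.
  const pathname = new URL(requestUrl, 'http://localhost').pathname;
  const segment = pathname.split('/')[1];
  switch (segment) {
    case '':
      return 'index';
    case 'login':
      return 'loginPage';
    default:
      return 'notFound';
  }
}

// Inside http.createServer's callback: route(req.url) picks the handler.
```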
[19:25:19] <anybroad_> StephenLynx: So the html in https://gitlab.com/mrseth/lynxchan/tree/master/src/fe are served statically as normal html to browser?
[19:25:31] <StephenLynx> yes
[19:25:41] <anybroad_> StephenLynx: and could you give me a line in your example where references are de-normalized? Because I want to learn more about this.
[19:26:10] <StephenLynx> line 181
[19:26:54] <anybroad_> StephenLynx: this?: https://gitlab.com/mrseth/lynxchan/blob/master/src/boot.js#L181
[19:27:03] <StephenLynx> is that what you meant?
[19:27:11] <StephenLynx> when I use an alias for a file?
[19:27:20] <StephenLynx> in a path that does not exist?
[19:28:53] <StephenLynx> in those cases the url is domain/boardName/something
[19:28:59] <StephenLynx> the folder boardName does not exist
[19:29:15] <anybroad_> StephenLynx: e.g. when a login is removed and then this login also removed from boards staff field
[19:29:25] <StephenLynx> oh
[19:29:27] <StephenLynx> that
[19:29:36] <StephenLynx> I haven't implemented account deletion yet
[19:29:44] <StephenLynx> but
[19:29:55] <StephenLynx> I got it, hold on
[19:30:23] <StephenLynx> i will link you my other project
[19:30:38] <anybroad_> StephenLynx: because I would have to do the same when a category is removed, the products must forget that category then or even removed altogether.
[19:31:30] <StephenLynx> https://gitlab.com/mrseth/bck_leda/blob/master/pages/delete_forum.js line 18
[19:31:51] <StephenLynx> in your case you would have to remove the field, there is an operator for that
[19:32:03] <StephenLynx> in my case I remove the forum from the list of forums the user has joined
[19:33:23] <StephenLynx> https://gitlab.com/mrseth/lynxchan/blob/master/src/be/public/setGlobalRole.js line 24
[19:33:25] <StephenLynx> unset
[19:33:40] <StephenLynx> it deletes a field from an object
[19:34:38] <StephenLynx> you would have to deal with books with undefined category.
[19:34:49] <StephenLynx> if you implement category deletion
[19:35:02] <StephenLynx> or pick a category and set it instead
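The cleanup StephenLynx describes, written out: when a category is deleted, `$unset` drops the field from every book that referenced it. Collection and field names ("books", "category") are assumptions for illustration.

```javascript
// The update document for removing a deleted category from all books.
const categoryToDelete = 'long category thing';

const cleanupUpdate = {
  filter: { category: categoryToDelete },
  update: { $unset: { category: '' } } // the value given to $unset is ignored
};

// In the shell this would run as:
// db.books.updateMany({ category: categoryToDelete },
//                     { $unset: { category: '' } })
```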
[19:35:06] <StephenLynx> whatever floats your goat.
[19:37:38] <LuckyBurger> hey quick Q: if i have a node.js object with functions and i save it to a mongodb collection, when I pull it back out will it still have the prototype functions still on the object ?
[19:37:59] <cheeser> what happened when you tried?
[19:38:10] <LuckyBurger> havent, trying to plan out my object structure
[19:38:37] <StephenLynx> no idea. I wouldn't consider it good design though.
[19:39:08] <StephenLynx> because then your database is coupled to your back-end
[19:39:43] <LuckyBurger> yeah thats fine for me.
[19:40:04] <StephenLynx> you really should raise your standards.
[19:40:23] <LuckyBurger> thanks for your opinion.
[20:18:38] <winem_> hey guys, did anyone ever participate in the courses from mongo university? unfortunately, I was not able to watch all the videos of the first chapter, due to some critical issues at work. chapter 2 starts tomorrow evening. would just like to know whether the videos and quizzes from chapter one will still be available after chapter 2 started
[20:19:38] <cek> what's the sense of having the $elemMatch operator if you can just use a {$gte: xx, $lt: yy} compound statement, for example?
[20:20:50] <octoquad> hi winem_ I did, just completed homework for this week. Not sure if it will be, but maybe you can ask the question on the discussion tab.
[20:24:13] <octoquad> any mongodb developers here?
[20:24:20] <winem_> did not see it yet. thanks octoquad
[20:24:40] <octoquad> winem_, no prob :)
[20:31:56] <cek> How do I extract out field values out of array of embedded docs?
[20:32:09] <cek> that's an aggregate over unwind? any other method?
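Yes: an `$unwind` followed by a `$project` is the usual way. Below is the pipeline shape plus a tiny in-memory model of what `$unwind` does to one document, so the transformation is visible. Collection and field names are made up for this sketch.

```javascript
// Pipeline: flatten the embedded-doc array, then keep only one field.
const pipeline = [
  { $unwind: '$products' },
  { $project: { _id: 0, name: '$products.name' } }
];
// db.orders.aggregate(pipeline)

// Roughly what $unwind does to a single document:
// one output doc per array element, with the array replaced by that element.
function unwind(doc, field) {
  return doc[field].map(item => Object.assign({}, doc, { [field]: item }));
}
```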
[20:46:38] <amdp> hello
[20:55:38] <centran> would anyone be able to help me with copying indexes from one server to another? (just the indexes)
[20:56:09] <centran> like recreate them
[21:02:15] <cek> for(var i = 0; i < users.length; ++i) {
[21:02:15] <cek> var x = users[i]; etc etc
[21:02:18] <amdp> hi
[21:02:25] <cek> is it possible to replace that with forEach or alike?
[21:03:08] <cek> oh, it is.
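For the record, cek's indexed loop rewritten with `forEach`, which works the same in the mongo shell and in node: the element and index arrive as callback arguments.

```javascript
// The for(;;)-loop from above as a forEach.
const users = ['a', 'b', 'c'];
const seen = [];

users.forEach(function (x, i) {
  // x replaces "var x = users[i]"; i is the old loop counter.
  seen.push(i + ':' + x);
});
```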
[21:06:00] <amdp> anyone who can help me in installing mongodb 2.6 on deb wheezy?
[21:06:42] <jiffe> is there a way to detect and repair/remove corruption?
[21:10:20] <cheeser> jiffe: mms can repair config servers.
[21:10:47] <jiffe> this isn't a config server
[21:11:19] <jiffe> one of my mongod's is crashing right after I start it and it looks like it happens right after it starts background syncing
[21:17:16] <cek> what's the channel for mongo support? not server, but queries?
[21:18:02] <jiffe> I don't think there is a different channel for that
[21:18:26] <cek> how do I get key names for embedded document?
[21:18:47] <cek> for THIS document: for (var key in this.products) { emit(key, null); } . What's the stanza for THIS.subdoc?
[21:19:04] <cek> oops, the above is for (var key in this) { emit(key, null); }
[21:19:18] <cek> i tried this.products, but it outputs INDEX, not key names
[21:22:18] <cek> found it! ----for (var key in this.products) { for (var kk in this.products[key]){ emit(kk, null); }}
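cek's nested for-in, runnable outside mapReduce with a made-up sample document: the outer loop yields array *indices* (which is why the first attempt emitted numbers), and only the inner loop reaches the embedded docs' key names.

```javascript
// Collect key names of each embedded doc in this.products (doc is a stand-in
// for "this" inside a map function; emit() is replaced by an array push).
const doc = { products: [{ sku: 1, qty: 2 }, { sku: 3, price: 4 }] };

const keys = [];
for (const key in doc.products) {        // key is the array INDEX ('0', '1')
  for (const kk in doc.products[key]) {  // kk is the embedded doc's key name
    keys.push(kk);                       // stands in for emit(kk, null)
  }
}
```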
[21:22:36] <cek> JS was a very bad choice, you should've picked ruby
[21:22:47] <cek> of course not python, it's impossible to write one-liners in it
[21:23:19] <cheeser> https://coderpad.io/
[21:47:34] <cek> guys, why is mongo not resetting the iterator when doing map and forEach?
[22:00:46] <sellout> Is { "$elemMatch": { "$eq": 4 } } the right thing to do? "$eq" doesn’t seem to be documented, but { "$elemMatch": 4 } errors with something like “argument to $elemMatch must be an object”.
[22:02:06] <cek> this channel is dead i think
[22:04:25] <StephenLynx> {$elemMatch:{field:value}}
[22:04:38] <suupreme> Hi, I am using mongodb together with Go. I am writing some tests now where I create a struct of a user. In the test I set the ObjectId. When I insert it into the test-db and try to query it back, I can't find it by id, but I can see the exact same user in there with the exact same hex id; in the test-struct it had a "1" at the end and in the db it became an "a". any ideas?
[22:05:10] <suupreme> so to summarize mongo replaced the last char in the object id, why?
[22:05:14] <StephenLynx> _id? mongo sets a value for it automatically
[22:05:42] <StephenLynx> you will have to deal with mongo doing stuff with the _id field. for one I don't touch it
[22:05:53] <cek> Is it possible to .find() and then .aggregate() the results? why not?
[22:06:04] <cek> i need to do a simple concat of 2 fields.
[22:06:06] <suupreme> StephenLynx: so _id is not a good "identifier"?
[22:06:39] <StephenLynx> because find returns an array of pointers, aggregate returns an array and both are methods of the collection pointer
[22:06:56] <StephenLynx> suupreme not for you to be messing with it, afaik.
[22:07:12] <StephenLynx> just get it back from the insert operation
[22:07:15] <suupreme> StephenLynx: I don't mess I just precreate an id with go
[22:07:25] <suupreme> just in my tests
[22:07:30] <suupreme> not possible I guess then
[22:07:40] <StephenLynx> probably is possible, you just have to deal with bullshit
[22:07:53] <StephenLynx> I use my own indexes, though
[22:08:47] <suupreme> StephenLynx: ok thanks, I will test the code in a different way
[22:08:48] <StephenLynx> I set whatever field I want as unique and pre-create, based on an incremented value somewhere
[22:08:48] <StephenLynx> because this huge _id is not very friendly to work with in the first place
[22:09:09] <StephenLynx> and if you want to have any relation, you will have just an unreadable id instead of something like a name
[22:09:48] <StephenLynx> then you either make two reads for the readable name of the related object, or you show the _id, or show nothing at all
[22:10:17] <suupreme> I see thanks
[22:10:22] <StephenLynx> so let's say I have a list of forums the user joined. instead of making an array with _ids, I set the forum name as unique and make a list of forum names in the user.
[22:10:37] <StephenLynx> that way I can show the list without a second query based on the _id's
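The two document shapes StephenLynx is contrasting, side by side. Field names are assumptions; the point is that the denormalized form can be rendered with no second query.

```javascript
// Normalized: forums hold _id references, so listing the user's forums
// needs a second query to resolve names.
const normalized = { login: 'ada', forums: ['id123', 'id456'] };

// Denormalized: forum names are unique, stored directly, rendered as-is.
const denormalized = { login: 'ada', forums: ['General', 'Help'] };
```

The trade-off is the usual one: renaming a forum now means updating every user document that lists it.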
[22:13:27] <sellout> StephenLynx: But I have a field with an array of numbers … there are no subfields.
[22:16:58] <StephenLynx> hm
[22:17:19] <StephenLynx> your array of numbers is a subfield, isn't it?
[22:17:41] <StephenLynx> because a collection is an array of objects in the first place
[22:18:14] <sellout> StephenLynx: But I can’t filter on $$ROOT, can I?
[22:19:43] <sellout> If I have a doc that looks like { …, nums: [4, 2, 1], …}, I want to aggregate([{ $match: { nums: { $elemMatch: { $eq: 4 } } } }, …]).
[22:21:30] <sellout> $eq seems to have appeared in 2.6, but I could find no mention of it, which makes me worry that it might break.
[22:25:06] <StephenLynx> im back
[22:25:48] <StephenLynx> show me your model
[22:26:49] <winem_> could need some help once more. wondering if it's just too late and I'm too tired.
[22:26:53] <winem_> I can connect to the mongos with a new user but it fails for mongoimport
[22:27:02] <winem_> mongoimport --db mc_mw --collection products --type json -p -u c_mw --port 37017 --file /tmp/products.json ... this returns error 18, auth failed
[22:27:37] <winem_> a login with authentication agains the admin db works fine
[22:27:45] <winem_> so I guess I just mixed up both dbs but have been trying to fix this for 15 minutes now...
[22:28:23] <winem_> what am I doing wrong? I don't get it..
[22:30:14] <sellout> StephenLynx: I don’t have a model. I’m working on a SQL->MongoDB compiler (http://slamdata.com) and am basically implementing `where 4 in someField`. Zips will work, though: { $match: { loc: { $elemMatch: { $eq: 43.058514 } } } } is what I currently generate from `select * from zips where 43.058514 in loc`.
[22:30:50] <sellout> Previously, we just used a $where in this case, but I saw $eq used somewhere, and it worked.
[22:42:29] <StephenLynx> then why not just {someField: 4}?
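StephenLynx's shorthand works because MongoDB treats `{ field: value }` against an array field as "array contains value", so both query forms below match a doc like `{ nums: [4, 2, 1] }`. The matcher function is a tiny in-memory model of that semantics, not driver code.

```javascript
// Two equivalent ways to ask "does the nums array contain 4?"
const queryShort = { nums: 4 };
const queryElemMatch = { nums: { $elemMatch: { $eq: 4 } } };

// In-memory model of the short form's "array contains" semantics.
function matchesShortForm(doc, query) {
  return Object.keys(query).every(f =>
    Array.isArray(doc[f]) ? doc[f].includes(query[f]) : doc[f] === query[f]);
}
```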
[22:42:30] <winem_> got it. helped to read the documentation about the authorization process itself once again
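Given that winem_'s interactive login "with authentication against the admin db works fine", the likely missing piece is telling mongoimport which db holds the credentials. This is an assumption about the fix, sketched with winem_'s own arguments; `--authenticationDatabase` is the relevant flag.

```shell
# Sketch: authenticate against admin while importing into mc_mw.
mongoimport --db mc_mw --collection products --type json \
  --port 37017 -u c_mw -p \
  --authenticationDatabase admin \
  --file /tmp/products.json
```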
[22:43:29] <cek> how do I count aggregated results?
[22:43:32] <sellout> StephenLynx: Can you write out the full $match, because I think I’m missing something.
[22:43:37] <cek> TypeError: Object #<Object> has no method 'size'
[22:44:54] <sellout> StephenLynx: someField doesn’t contain a number, it contains an array of numbers.
[22:45:00] <StephenLynx> cek: length, it is an array
[22:45:20] <StephenLynx> sellout sec, I will get a snippet from a project of mine
[22:45:22] <cek> ReferenceError: length is not defined
[22:45:26] <cek> TypeError: Object #<Object> has no method 'length'
[22:45:32] <sellout> StephenLynx: Thanks.
[22:45:37] <cek> db.coll.aggregate().length()
[22:46:06] <bmillham> cek: len(db.coll.aggregate()) should work
[22:46:23] <bmillham> Oh, oops, only if you are using mongoengine
[22:46:46] <StephenLynx> https://gitlab.com/mrseth/bck_leda/blob/master/operations.js line 310
[22:47:17] <StephenLynx> cek you don't do like that
[22:47:39] <StephenLynx> the aggregate method will receive a callback, in this callback it will have 2 arguments: error and results.
[22:47:52] <StephenLynx> results will hold the list which will have a length
[22:47:53] <cek> i'm using mongo cli
[22:47:58] <StephenLynx> oh
[22:48:02] <StephenLynx> then I got no idea :v
[22:48:06] <StephenLynx> oh, wait
[22:48:08] <cek> crap.
[22:48:20] <StephenLynx> then you just use db.col.aggregate() and it will print the results
[22:48:30] <StephenLynx> you want a count?
[22:48:30] <sellout> StephenLynx: Ah, interesting … well, at least that is documented :)
[22:50:32] <StephenLynx> cek try using group
[22:50:39] <StephenLynx> and $inc
[22:56:29] <sellout> StephenLynx: Thanks again – going to use $in :)
[22:56:45] <cek> .itcount()
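To round off cek's thread: `.itcount()` counts on the client by iterating the shell cursor, while a `$group` stage counts on the server (the aggregation operator for this is `$sum`, not the `$inc` update operator mentioned above). The pipeline shape, with an assumed match stage:

```javascript
// Server-side count of matching documents.
const countPipeline = [
  { $match: {} },                            // filter would go here
  { $group: { _id: null, n: { $sum: 1 } } }  // one output doc: { _id: null, n: <count> }
];
// Shell alternatives:
// db.coll.aggregate(countPipeline)
// db.coll.aggregate(pipeline).itcount()   // cek's client-side answer
```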