[00:19:07] <kba> is there a reason why mongodb doesn't use sequential ids by default? I found a guide to changing it, which I was considering, since urls with a 24-length hex string isn't that nice
[00:20:21] <Razerglass> hey im using the findOne function, and trying to print out the DB document it found, but im having trouble going deeper past the ID
[00:20:31] <kba> StephenLynx: so what is it a hash of?
[00:20:33] <Razerglass> how do i access specific property from my document
[00:24:40] <Razerglass> is findOne not what i need to use?
[00:25:07] <joannac> I have no idea what you want to use
[00:25:30] <joannac> I'm telling you the document you get and the document you want are 2 different documents
[00:25:40] <kba> so if I want some nice id, should I just create another field, or would it be okay to make the _id sequential?
[00:25:53] <joannac> kba: if you want to generate your own id, feel free.
[00:26:01] <joannac> but then you need to make sure they don't clash
[00:26:32] <StephenLynx> kba I prefer to create a new unique index.
[00:27:17] <kba> I was just wondering if this technique was bad http://docs.mongodb.org/manual/tutorial/create-an-auto-incrementing-field/
[00:27:33] <kba> I guess I'd lose the possibility of using getTimestamp(), but would it cause other issues down the road?
[00:28:00] <kba> I can imagine if I'd need sharding eventually, having a sequential id would be troublesome
[00:28:04] <StephenLynx> probably not bad, since it's from 10gen's official material, but that doesn't mean it's the most intuitive solution either.
[00:28:33] <StephenLynx> it's much easier to just use findOneAndUpdate with an $inc operator.
[00:28:43] <StephenLynx> instead of setting up all that stuff on that page.
[00:30:24] <StephenLynx> or if you want to make sure everything goes fine, just get the value, try to use the incremented value, and keep increasing if it hits a duplicate index error; when you are done, update the counter with the incremented value.
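A minimal sketch of the counter approach being discussed, using the mongo shell's findAndModify (the Node driver equivalent is findOneAndUpdate). The `counters` collection and field names are illustrative, not from the conversation:

```javascript
// Atomically reserve the next sequence number for the "users" counter.
// $inc with new: true returns the already-incremented document, so two
// concurrent callers can never receive the same value.
var ret = db.counters.findAndModify({
  query: { _id: "users" },
  update: { $inc: { seq: 1 } },
  new: true,
  upsert: true   // first call creates { _id: "users", seq: 1 }
});

// Use the reserved number as the _id (or as a separate unique-indexed field).
db.users.insert({ _id: ret.seq, name: "kba" });
```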
[00:32:39] <StephenLynx> ah, I just read the page.
[00:42:32] <StephenLynx> then you gave me a completely arbitrary reason to discard my advice.
[00:42:36] <kba> to create a collection just for that
[00:42:43] <kba> now you're misunderstanding, then
[00:42:45] <StephenLynx> no, because you already have your users.
[00:42:52] <StephenLynx> you can pivot the entries on them.
[00:43:06] <StephenLynx> I think it's impossible that your entries can't be pivoted on anything.
[00:43:23] <kba> I can't immediately think of something
[00:43:35] <kba> in my database, they're a "top-level schema" or whatever you'd like to call it
[00:43:47] <Razerglass> ok so i understand the document IDs are different, http://pastebin.com/XF6M2FFD but what am i doing wrong that isn't printing out my whole document?
[00:44:18] <Razerglass> im actually using findOne, sorry it isn't in the example, i had changed it
[00:44:22] <kba> StephenLynx: but even if they were in the users' schemas, I wouldn't want the ids to be /<username>/<entryid>/
[04:16:53] <grazfather> I am trying to do an aggregate where I combine objects that look like { id: 1, field: a, value: A } and { id: 1, field: b, value: B } into { id: 1, a: A, b: B }, and I am matching the id using $in: [list]. I am getting "A pipeline stage specification object must contain exactly one field."
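That error usually means two stage operators ended up in a single object; each pipeline stage must be its own document. A sketch of the distinction, with collection and field names assumed from the description above:

```javascript
// Wrong: two stage operators in one object triggers
// "A pipeline stage specification object must contain exactly one field."
// db.coll.aggregate([ { $match: { ... }, $group: { ... } } ])

// Right: one object per stage.
db.coll.aggregate([
  { $match: { id: { $in: [1, 2, 3] } } },
  { $group: {
      _id: "$id",
      // collect the field/value pairs so they can be merged per id client-side
      fields: { $push: { field: "$field", value: "$value" } }
  } }
]);
```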
[06:17:30] <Boomtime> SNow: shutdown the arbiter, check the status while it is down - the config may not have updated on the arbiter
[06:17:56] <Boomtime> if so, you can remove the content of dbpath on the arbiter then start it up again
[06:19:18] <Boomtime> (note that arbiters do store a very small amount of information in dbpath for the local config)
[06:25:35] <grazfather> Hey guys, I am trying to do an aggregate where I combine objects that look like { id: 1, field: a, value: A } and { id: 1, field: b, value: B } into { id: 1, a: A, b: B }, and I am matching the id using $in: [list]. I am getting "A pipeline stage specification object must contain exactly one field."
[07:18:16] <yeahiiiiii> hey, I am new to mongodb and struggle a bit with my schema. lets say I have a products collection and a category collection. common many to many relation. Products could either store a list of category ids and I do a second lookup or store category names so I can fetch all information in one query. In the latter case, when I update a categories name, how can I ensure that this update will be propagated to all products and their category list?
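One common compromise for the question above: store category references (and a denormalized copy of the name) on each product, then propagate a rename with a multi-update. This is only a sketch; the `products`/`categories` collections, the embedded-array shape, and `catId` are assumptions, and the positional `$` operator updates just the first matching array element per document (fine if a category appears at most once per product):

```javascript
// Rename the category itself...
db.categories.update({ _id: catId }, { $set: { name: "New Name" } });

// ...then propagate the denormalized copy to every product embedding it.
db.products.update(
  { "categories._id": catId },
  { $set: { "categories.$.name": "New Name" } },
  { multi: true }
);
```

Without transactions, the two updates are not atomic together, which is the usual cost of denormalizing for one-query reads.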
[07:58:06] <foofoobar> Hi. I have this aggregate to group my "pageviews" and display two values: hits and uniques per day. This works as it should: http://hastebin.com/avayizuzad.js
[07:58:38] <foofoobar> As you can see I also do an $avg on the $onPage time, which works, but instead of the average I want to calculate the median. Because this is not possible with mongodb I want to do this in javascript.
[07:59:05] <foofoobar> To do this I want to strip the $avg and just output all onPage times like this: [5, 1, 3, 5, 1]
[07:59:40] <foofoobar> I tried it this way: http://hastebin.com/nomonudadu.js
[08:00:11] <foofoobar> But then average_time is an empty list. The first $group in my pipeline works as it should; all values are combined into a list
[08:00:17] <foofoobar> But the second does not. Where is my error?
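If the goal is just to collect every onPage value per day so the median can be computed client-side, the $push accumulator inside $group does that. A sketch only; the collection and field names are assumed from the description, not taken from the hastebin pipelines:

```javascript
db.pageviews.aggregate([
  { $group: {
      _id: "$day",
      hits: { $sum: 1 },
      // $push keeps every value, duplicates included;
      // $addToSet would silently drop the repeated 5s and 1s,
      // which skews a median
      times: { $push: "$onPage" }
  } }
]);
// shape of one result: { _id: <day>, hits: <n>, times: [5, 1, 3, 5, 1] }
```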
[08:04:40] <Boomtime> foofoobar: have you tried running it in the shell?
[08:04:59] <foofoobar> Boomtime: Why should this change anything?
[08:29:53] <techie28> I am developing a small webservice which takes an array of JSON objects and in each of them there is an index called "ACTION" which will tell the service to add/edit the data depending on its value.
[08:30:23] <techie28> I will have to make one call for adding and one for updating.
[08:31:00] <techie28> I was wondering if I could use "multi" and "upsert" here to do everything in one call only?
[08:44:23] <koren> foofoobar: no sorry, I'm really not confident with aggregate yet, but I think you can't use $each with $addToSet in it, not sure
[08:44:25] <valera> hello. my mapreduce functions fail with utf-8 errors (JS Error: TypeError: malformed UTF-8 character sequence...). is there a way to locate documents with malformed utf-8 ? exception did not give an ObjectId
[08:44:42] <koren> techie28: take a look at bulk upsert: http://docs.mongodb.org/manual/reference/method/db.collection.initializeUnorderedBulkOp/#db.collection.initializeUnorderedBulkOp prepare your queries and execute them in one call
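A sketch of the bulk approach koren links, shaped around the ACTION field described above. The `items` array, its `ACTION` values, the `data` payload, and `email` as the match key are all assumptions for illustration:

```javascript
// Build one bulk operation from the incoming array; each item's ACTION
// decides whether it becomes an insert or an update.
var bulk = db.users.initializeUnorderedBulkOp();

items.forEach(function (item) {
  if (item.ACTION === "ADD") {
    bulk.insert(item.data);
  } else {
    // edit an existing document, matching on a unique key
    bulk.find({ email: item.data.email }).updateOne({ $set: item.data });
  }
});

// one round trip to the server for the whole batch
bulk.execute();
```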
[08:46:47] <foofoobar> koren: it should be possible: http://docs.mongodb.org/manual/reference/operator/update/each/
[08:47:50] <koren> they provide only examples with update, and when I look at $addToSet in the aggregation framework, they say it supports only accumulator expressions: http://docs.mongodb.org/manual/reference/operator/aggregation/#aggregation-accumulator-operators
[08:55:11] <techie28> koren: how about this db.getCollection('users').update({'email':'abc@mail.com'},{'firstName':'name1'},{"multi" : true,"upsert" : false});
[08:55:20] <foofoobar> I think I will use this and then flatten "by hand" in javascript when I get the result
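Flattening "by hand" and taking the median client-side can look like this; the nested `bucket` array is a made-up stand-in for one aggregation result:

```javascript
// Compute the median client-side, since the aggregation framework
// has no median accumulator.
function median(values) {
  if (values.length === 0) return null;
  var sorted = values.slice().sort(function (a, b) { return a - b; });
  var mid = Math.floor(sorted.length / 2);
  // even count: average the two middle values; odd count: take the middle one
  return sorted.length % 2 === 0
    ? (sorted[mid - 1] + sorted[mid]) / 2
    : sorted[mid];
}

// e.g. one day's bucket of nested onPage arrays from the aggregation
var bucket = [[5, 1], [3], [5, 1]];
var flat = [].concat.apply([], bucket); // [5, 1, 3, 5, 1]
var med = median(flat);                 // 3
```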
[08:55:52] <techie28> shouldn't this call update the firstName if that email was found, and otherwise add a new doc?
[08:57:22] <techie28> db.getCollection('users').update({'email':'abc@mail.com'},{'firstName':'name1'},{"multi" : false,"upsert" : true});... sorry, this is what I meant
[08:57:46] <techie28> setting upsert to true? is it not a good idea?
[09:16:39] <techie28> koren: yes it did.. thanks a lot. :)
[09:17:45] <techie28> koren: I am updating one value only here.
[09:18:36] <techie28> koren: If that query criteria doesn't match, it creates a new doc with just that name and the queried email.. is it impossible to have all the other fields there?
[09:22:11] <koren> it is possible, yes. you want them empty?
[09:23:34] <koren> but it's not useful with mongo
[09:23:58] <koren> mongodb is made in a way that if you don't have the field, you just don't set it and nothing will break
[09:24:16] <koren> and if you want to query for field existing, just use $exists
[09:24:27] <koren> you don't have to set fields to null or "" like in sql
[09:25:33] <koren> it is possible but it's bad practice imo
[09:25:39] <techie28> like there are fields in the table, createdISO, updatedISO etc., which have timestamps in them.. in case of upsert true they are not present in the created doc
[09:25:55] <koren> that's why you have $setOnInsert
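koren's point as a sketch: $set applies on every matching update, while $setOnInsert applies only when the upsert actually creates the document, which is where a createdISO timestamp belongs. Field names are taken from the conversation; the values are illustrative:

```javascript
var now = new Date();

db.users.update(
  { email: "abc@mail.com" },
  {
    $set:         { firstName: "name1", updatedISO: now }, // every update
    $setOnInsert: { createdISO: now }                      // only on insert
  },
  { upsert: true }
);

// Later, instead of storing nulls for absent fields, query presence with:
// db.users.find({ createdISO: { $exists: true } })
```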
[10:13:59] <makinen> can the authentication be enabled only for the remote connections?
[10:23:09] <hrusti> I am starting to develop a web application which will have semi-big datasets and many relations, and I can't decide between NoSQL and SQL. I am thinking about mysql and mongo. I have read that NoSQL databases are not good with many relationships. What do you suggest?
[10:26:43] <vilcoyot> Hi, I would like to install mongodb 3.x on debian jessie. But the repo is only providing wheezy packages...
[10:26:54] <vilcoyot> Is there a plan to deliver them?
[10:27:00] <Derick> they will install on jessie though
[10:30:38] <rasputnik> check my thinking here: devs have a one-off query they want to do, and without an index it's going to take a while, so they have asked me to create an index.
[10:30:43] <rasputnik> but this is a one-off query
[10:31:05] <rasputnik> so won't it be a waste of time? the work to create the index will be as much as the work to do the query, right?
[10:31:36] <rasputnik> plus we have the cost of the indexed query, which is unlikely to be 'free'. and the maintenance overhead of that index for all writes.
[14:25:20] <deathanchor> pamp: sounds like it was the chunksplitter probably running on that collection or DB.
[14:45:48] <cpennington> I've got a mongo instance (on compose.io) where reading a 1.6mb document takes ~3 seconds (using the commandline client) and similar from the python client
[14:46:33] <cpennington> I'm looking now into how to shrink the size of the document (remove data that we shouldn't be storing anyway, and minimizing document keys), but the time to load the data seems odd to me
[14:48:43] <cheeser> how much of that is network IO?
[14:50:17] <makinen> cheeser: probably not; the server change will be done at night
[15:06:03] <cpennington> cheeser: not much, I don't think. is there a way to get that measurement in the js cli client?
[16:23:44] <koren> you may be more interested in sharded clusters to balance load but it is a bit more complicated than replica set (even if mms makes it quite easy)
[16:31:38] <svm_invictvs> I'm curious, how do I rename a file in GridFS?
[16:31:51] <svm_invictvs> Is it as simple as theFile.set("name", "newName") and saving it?
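GridFS stores file metadata as ordinary documents in the `fs.files` collection (with the default "fs" bucket), so a rename is an update of the `filename` field there. A shell sketch under that assumption; `fileId` is a placeholder, and the exact driver-level API (the theFile.set(...) shape above) varies by driver:

```javascript
// The chunks in fs.chunks reference the file by _id, so updating the
// metadata document's filename does not touch the stored data.
db.fs.files.update(
  { _id: fileId },
  { $set: { filename: "newName" } }
);
```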
[19:42:19] <UnNaturalHigh> Is anyone here aware if it's possible to use apache spark with mongodb without hadoop? I noticed the hadoop-connector, but it isn't clear whether hadoop is also a requirement.
[19:45:14] <StephenLynx> you will have to check with the spark guys
[19:47:08] <cheeser> UnNaturalHigh: currently no but that's being investigated
[19:50:54] <saml> if you're using mongod and hadoop and spark.. you're really big data
[20:07:58] <f31n> hi, i'm new to the nosql idea; i'm used to working with relational databases. my question is: is that a proper schema, or is there a way to make it better: http://pastebin.com/twzRAz3C
[20:12:09] <StephenLynx> there seems to be some kind of mistake there
[20:12:28] <StephenLynx> at first I thought you nested all that, but then I noticed _id in the last depth
[20:20:32] <StephenLynx> that would be extremely odd.
[20:20:33] <Streemo> at high volume, my data checking reveals that the score.calculated fields of some documents do not match generateScore(score.up, score.total)
[20:45:43] <StephenLynx> it isn't much different than canine genealogy.
[20:47:49] <f31n> true, but there is and will always be something you can tell about your family, for your children for example ... i loved hearing the tales behind my family's names