[02:00:32] <pengwn> I would like some info on whether to use record references or record embedding on shard instances: which will give better performance?
[07:55:15] <Mothership> is it possible in mongo.config to add more than one path to db files?
[08:37:04] <rAg3nix> hi, i am new to mongodb. i am trying to import a mysql database using mongoimport, but there is a limitation of 16mb, whereas my db size in json is 350 mb. can anyone please tell me how to do it?
[08:42:44] <joannac> rAg3nix: the maximum document size is 16mb
[08:43:10] <joannac> surely you're not storing the whole thing as one document...
[08:43:30] <joannac> Mothership: no. what's the use case?
[08:44:28] <rAg3nix> i am trying to do that i guess, and it's not happening for obvious reasons!! how do i do it? i tried using gridfs!! it stores the whole file, but how do i run queries against that?
[08:45:38] <Mothership> joannac, I want to commit my project to git with db files included, so when I pull it, I can automatically get the updated db files without the need to copy them to the main db path
[08:46:25] <Mothership> I know I can use a bash script, but I just thought maybe it's possible to add more db paths
[08:49:45] <joannac> Mothership: umm, how big is your database? I can't see that scaling well
[08:50:40] <joannac> rAg3nix: you don't. import your rows as documents or something. you're going to get terrible performance if you store your whole sql database as a mongodb gridfs file...
[08:52:47] <Mothership> joannac, it's just for pre-production
[08:52:52] <rAg3nix> joannac: no no, what i am trying to do is: i have the base database, and there are 3 tables that i need to migrate to mongodb on a master-slave replica. the first table was done; the second one, which i am trying now, is 350 mb and refuses to go in as a single document. how do i resolve it, or how do i do it?
[08:57:47] <joannac> rAg3nix: you have a single table which is >350mb that you are trying to insert as a single mongodb document?
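A sketch of the per-row approach joannac is pointing at, with a hypothetical "orders" table: each SQL row becomes its own small document, so nothing comes near the 16mb per-document limit, and mongoimport's default line-delimited JSON format handles files of any size.

    // each SQL row becomes one small document, not one giant blob
    db.orders.insert({ order_id: 1, customer: "Alice", total: 19.99 });
    db.orders.insert({ order_id: 2, customer: "Bob",   total: 5.49 });

    // exported as line-delimited JSON (one document per line), the same
    // data imports in bulk with:
    //   mongoimport --db mydb --collection orders --file orders.json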
[09:26:13] <Nodex> the8thbit : I would think so yes
[09:28:45] <the8thbit> Nodex: Thanks. If I want to do a query from my client, what would be the easiest way to do this? Just set up a route for a .find() on the particular collection, and then use ajax to pass it a query?
[09:32:42] <amitprakash> Hi, given an _id, is it possible for me to look for a document across multiple collections matching _id
[09:36:39] <Nodex> the8thbit : that's down to your app
[09:45:13] <the8thbit> In this particular case, I'm not querying private info
[09:45:32] <amitprakash> In general this is a bad idea
[09:45:41] <Nodex> it's not really the point. Build and follow good practices
[09:45:44] <amitprakash> Though I do not have the context for what you guys are discussing
[09:46:01] <the8thbit> Nodex, amitprakash: Well, I'm looking for good practices :)
[09:48:10] <the8thbit> amitprakash: Basically, something similar to a reddit profile, where I have a list of comments made by all users, and I need to list just the ones made by the user who owns the profile
[09:50:13] <the8thbit> I have authentication, so for private info, I could just make sure that the user is authenticated as the same username they're querying for?
[09:50:57] <amitprakash> Why does a user make a query at all
[09:51:05] <amitprakash> Queries should be server side
[09:51:31] <amitprakash> In general, do not expose your db to the public
[09:52:03] <amitprakash> Nodex, so there's nothing short of cycling through multiple collections and looking for the _id
[09:52:25] <amitprakash> Nodex, also, can two documents share the same _id when they lie in different collections?
[09:55:53] <the8thbit> amitprakash: We're talking about the same thing, yes? I set up a route, say /mongo/userComments, that takes a string or two as a query. I then have a mongoose query server-side that takes the string I sent to the route and does the actual querying
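A minimal sketch of the pattern the8thbit describes, assuming Express and Mongoose; the route, model, and field names are hypothetical. The client only ever sends a plain string, and the actual query is built and run server-side:

    var express  = require('express');
    var mongoose = require('mongoose');

    mongoose.connect('mongodb://localhost/mydb');
    var Comment = mongoose.model('Comment', { username: String, text: String });

    var app = express();
    app.get('/mongo/userComments', function (req, res) {
      // build the query server-side from the string parameter
      Comment.find({ username: req.query.username }, function (err, comments) {
        if (err) return res.send(500);
        res.json(comments);   // the client gets JSON back via ajax
      });
    });
    app.listen(3000);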
[09:57:44] <Nodex> amitprakash : I am not sure on that one, I don't see why not
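For amitprakash's two questions, a mongo shell sketch of the cycle-through approach (the target id is hypothetical); and since _id is only constrained to be unique within a single collection, two documents in different collections can indeed share one:

    // look for one _id in every collection of the current database
    var target = ObjectId("531f0a8b1234567890abcdef"); // hypothetical id
    db.getCollectionNames().forEach(function (name) {
      var doc = db.getCollection(name).findOne({ _id: target });
      if (doc) print("found in " + name + ": " + tojson(doc));
    });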
[11:00:17] <amitprakash> the8thbit, you're talking about searching on a string?
[11:25:47] <morfin> which is calculated when i add a new document, to simplify search when i've got lots of documents and need to search by that criteria
[12:25:54] <hjb> Derick: well, ok. php is no option for us. i guess the java client should also work?
[12:26:14] <Derick> it would surprise me if it didn't, but I can't give you an authoritative answer
[12:33:19] <hjb> Derick: ok, np. i'll just check it
[12:37:18] <traplin> Meteor.users.find({"services.facebook.name": friendName}).fetch(); with this statement, how would i exclude any documents where the field "_id" equals "personID"?
[12:38:29] <Derick> {"services.facebook.name":friendName, '_id' : { '$ne' : personID }} ought to do it
[12:39:50] <traplin> ah thanks Derick ! that works perfectly
[12:40:02] <traplin> was getting confused about whether to use $ne or $not and where to put them
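For reference, a sketch of the distinction: $ne compares against a plain value, while $not wraps another operator expression, so a simple "not this id" wants $ne:

    // exclude the document whose _id equals personID
    Meteor.users.find({
      "services.facebook.name": friendName,
      _id: { $ne: personID }
    }).fetch();

    // $not, by contrast, negates an operator expression, e.g.
    // { age: { $not: { $gt: 30 } } }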
[12:41:23] <hjb> Derick: but for the "standard" command line client mongo there's little chance, right?
[13:07:18] <katspaugh> Hi! What is the recommended way to handle connections to MongoDB, when using mongodb-native driver?
[13:08:00] <katspaugh> Should I open a single connection and reuse it throughout my app, or open new connections to the DB each time I need to query or insert?
[13:08:57] <katspaugh> Right now, I’m reusing a single connection, never closing it. I’m experiencing hangs on bulk inserts and when querying collections.
[13:13:18] <Nodex> I open one at the start and don't have a problem with throughput
[13:18:46] <katspaugh> Nodex: thanks! It must be something in my app, then.
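A minimal sketch of the open-once pattern Nodex describes, using the callback-style mongodb-native API of this era; the URL and collection name are hypothetical:

    var MongoClient = require('mongodb').MongoClient;

    // open a single connection (pool) at startup and share the handle
    MongoClient.connect('mongodb://localhost:27017/mydb', function (err, db) {
      if (err) throw err;

      // reuse `db` for every subsequent operation throughout the app;
      // never open a fresh connection per query or insert
      db.collection('items').insert({ name: 'example' }, function (err, r) {
        if (err) console.error(err);
      });
    });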
[13:21:18] <LoneSoldier728> am i calling populate wrong http://pastebin.com/S2QmZwPv it does not grab the results even though they are there
[14:28:12] <cfinley> Could anyone explain to me why the reduce method needs to output a value that can run through reduce again and output the same value?
[14:31:34] <kali> cfinley: it enables reducing in several stages: you can have a first reduce step on each shard of your sharded database, then one on a single node to finish the work. it also avoids reduce steps that would require an enormous values array, sometimes exceeding memory
[15:00:45] <cfinley> If the map/reduce is running on one machine, should reduce run multiple times?
[15:05:07] <G1eb> Hello, is it possible to have an array with a max length, meaning that when I push something new to it, it automatically deletes the oldest item and adds the new one?
[15:05:24] <kali> i think the behaviour is to reduce by chunks of 1000 docs, so yes, it can. you must not make assumptions here anyway
[15:05:44] <G1eb> oh wait, I think I found something like a slice param on update
[15:05:46] <kali> cfinley: also, be aware that reduce may not be called at all if values[] has just one element
[15:06:02] <kali> G1eb: yep, $push with $sort and $slice
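A sketch of that combination with hypothetical field names, assuming MongoDB 2.4+: each push re-sorts the array and trims it to the newest 10 entries:

    // keep only the 10 most recent scores: push, sort by time, trim
    db.players.update(
      { _id: playerId },
      { $push: { scores: {
          $each:  [ { value: 42, ts: new Date() } ],
          $sort:  { ts: 1 },    // oldest first...
          $slice: -10           // ...then keep only the last 10
      } } }
    );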
[15:06:24] <cfinley> kali: I did notice that. Does that mean any statistics should be done in the finalize?
[15:06:25] <G1eb> awesome, I knew mongo had something automatic for this
[15:06:46] <kali> G1eb: it's quite recent, to be honest :)
[15:06:52] <cfinley> kali: I'm writing a method that takes user events and generates user sessions for analytics.
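To illustrate both of kali's points with the classic average (the event fields here are hypothetical): reduce folds partials into the same shape it consumes, so re-reducing its own output is safe, and the final division lives in finalize, which runs once per key:

    // map: emit one partial result per event
    var map = function () {
      emit(this.userId, { count: 1, total: this.duration });
    };

    // reduce: folds partials into a partial of the same shape, so
    // reduce(key, [reduce(...), reduce(...)]) still gives the same answer
    var reduce = function (key, values) {
      var out = { count: 0, total: 0 };
      values.forEach(function (v) {
        out.count += v.count;
        out.total += v.total;
      });
      return out;
    };

    // finalize: runs exactly once per key, the safe place for statistics
    var finalize = function (key, value) {
      value.avg = value.total / value.count;
      return value;
    };

    db.events.mapReduce(map, reduce, { out: "sessions", finalize: finalize });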
[15:08:01] <G1eb> kali, great functionality imo, so practical sometimes
[15:19:20] <G1eb> because of that nss 2014 meeting
[15:19:28] <kali> Nodex: it's actually not that clear. they have officially decided that the debris in the south indian ocean has to be it, but i don't think there is actual physical proof yet :)
[15:21:00] <kali> but it sounds like you could nearly see it from perth
[15:26:33] <hipsterslapfight> it's flat enough you can see for miles kali! :v
[15:26:38] <hipsterslapfight> (i don't miss living in perth)
[16:13:18] <G1eb> kali, I'm trying this $push $sort and $slice approach but keep getting: RangeError: Maximum call stack size exceeded =/
[16:13:25] <G1eb> any ideas what can possibly trigger it?
[16:22:37] <viktor_> How do I make a query on several _id's and get the result as an array with documents <in the same order> as my list of _id's? I've tried with $or and $in ...
[16:25:34] <viktor_> rkgarcia: How do I use sort in this case? I only know how to sort by some key.
[16:33:03] <riceo> hi everyone. quick sharding implementation question: are there any best practices around config server placement? I was thinking of running the config servers on my shard servers
[16:43:37] <cheeser> i have the same combination on my luggage!
[16:44:26] <rkgarcia> viktor_: you need some code then :)
[16:45:28] <viktor_> rkgarcia: hehe yes, it feels like this is something that many people will want to do, so I'm wondering if there's a recommended way of doing it. I don't see an obvious and nice way of doing it.
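Since $in makes no ordering promise, the usual recipe is to fetch in one round trip and reorder client-side against the original list; a mongo shell sketch with hypothetical ids:

    var ids = [ id1, id2, id3 ];   // the order you want back

    // one round trip, results come back in arbitrary order
    var docs = db.things.find({ _id: { $in: ids } }).toArray();

    // index by _id, then walk the original list to restore the order
    var byId = {};
    docs.forEach(function (d) { byId[String(d._id)] = d; });
    var ordered = ids.map(function (id) { return byId[String(id)]; });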
[16:57:34] <the8thbit> How would I query for info in a member of an array? Also, is there any easy way to query for data in _every_ member of an array?
[16:59:01] <rkgarcia> the8thbit: sounds like elemMatch operator
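A sketch of both queries against a hypothetical "results" array of subdocuments: $elemMatch matches when at least one element qualifies, and since there is no direct "all elements" operator, the double negation ("no element fails") is a common workaround:

    // at least one array element with score > 5
    db.games.find({ results: { $elemMatch: { score: { $gt: 5 } } } });

    // every element has score > 5: "no element fails the test"
    // (note: this also matches docs whose results array is empty or missing)
    db.games.find({
      results: { $not: { $elemMatch: { score: { $not: { $gt: 5 } } } } }
    });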
[17:35:57] <ShortWave> Is there a way to append to an array in a subdocument? I can't seem to get that to work right.
[17:37:55] <kali> ShortWave: show us what you are trying
[17:38:44] <ShortWave> Ah, I'm in the middle of a discussion about how to handle this part of the data model, one second. The exact use case here is in some doubt, apparently.
[17:38:58] <rkgarcia> ShortWave: $set with "subdocument.value" ?
[17:39:32] <ShortWave> Wasn't working, but now I think it was a criteria problem.
[17:39:49] <ShortWave> I tested it bare on a new collection.
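For the record, dot notation plus $push is usually all it takes here; a sketch with hypothetical names:

    // append "blue" to the tags array inside the profile subdocument
    db.users.update(
      { _id: userId },
      { $push: { "profile.tags": "blue" } }
    );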
[18:12:28] <faef> if I want a hidden secondary to do a fresh initial sync, I delete all the data in dbpath: does this include the local db?
[18:15:10] <decompiled> faef, you can. how large of dataset do you have?
[18:15:30] <decompiled> resyncing anything sort of large sucks
[18:16:04] <faef> 1TB, it's annoying but it's the only way to get a compacted database without downtime
[18:16:43] <faef> we just deleted a bunch of indices on the primary, so we want to reclaim that disk space. the choice is either repairDatabase directly on the secondary, or just deleting the data and resyncing
[18:18:15] <kali> faef: yes, delete the local database too
[18:18:28] <kali> faef: literally remove everything from the dbpath directory
[18:19:07] <decompiled> have you done this before?
[18:19:28] <kali> decompiled: me ? yeah. dozens of times
[18:57:27] <Honeyman> Hello. If I have 4 equal hosts, what is the best idea for a replica set configuration to utilize them fully?
[18:57:34] <Honeyman> The docs say if I have an even number (4) of replica set hosts, I should run an arbiter, but it shouldn't run on the same host as any of the rs members, so that calls for a fifth host...
[18:57:40] <Honeyman> Am I right that the best I can do is "1 primary, 2 regular secondaries, 1 non-voting secondary"?
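A sketch of that layout, making the fourth member non-voting (and priority 0 so it can never be elected) to keep an odd number of voters; hostnames are hypothetical:

    rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "host1:27017" },
        { _id: 1, host: "host2:27017" },
        { _id: 2, host: "host3:27017" },
        // non-voting secondary: holds data, never votes or gets elected
        { _id: 3, host: "host4:27017", votes: 0, priority: 0 }
      ]
    });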
[19:03:02] <G1eb> in the docs on http://docs.mongodb.org/manual/reference/operator/update/push/#up._S_push they use such a weird notation for push with slice at the bottom
[19:03:26] <G1eb> i can't seem to work around that $each, as I only need to push 1 item
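The wrinkle here: $slice (and $sort) are only valid inside a $push that uses $each, so a single item simply gets wrapped in a one-element array:

    // push one item and cap the array at the 5 newest entries
    db.coll.update(
      { _id: docId },
      { $push: { items: { $each: [ newItem ], $slice: -5 } } }
    );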
[22:47:21] <alexi5> based on your experience what advantages does this type of database offer you over a relational database ?
[22:55:46] <scottyob> G'Day everyone. Do you guys help with minimongo too?
[23:02:30] <LoneSoldier728> hey so I am getting back this from populate... http://pastebin.com/sigfqXKu and trying to call results.stories says it's undefined... even results.stories[0] says can't get 0 of undefined... and also, there should be two objects in there, not one... what am i doing wrong?
[23:03:31] <ShortWave> So just out of curiosity... why is it "ensureIndex"? is that just some functional naming with more meaning than normal?
[23:03:32] <LoneSoldier728> anyone have a clue? about had it with mongodb today
[23:06:07] <ShortWave> I'll give it a look, but I'm no expert
[23:06:33] <ShortWave> results[0].stories[0] I think
[23:06:43] <ShortWave> Results is presented as an array
[23:06:58] <ShortWave> Unless I'm reading that returned structure wrong.
[23:07:11] <ShortWave> but it looks like it ends in ] to me
[23:07:41] <ShortWave> let me know if that solves it...or not. I'll look deeper at it if you still have a problem